
Re: [Xen-devel] [PATCH v3 8/9] libxc: rework of domain builder's page table handler



On 10/29/2015 03:02 PM, Wei Liu wrote:
On Thu, Oct 29, 2015 at 02:18:31PM +0100, Juergen Gross wrote:
On 10/29/2015 01:48 PM, Wei Liu wrote:
On Tue, Oct 13, 2015 at 03:11:17PM +0200, Juergen Gross wrote:
In order to prepare a p2m list outside of the initial kernel mapping,
do a rework of the domain builder's page table handler. The goal is
to be able to use common helpers for page table allocation and setup
for the initial kernel page tables and the page tables mapping the p2m
list. This is achieved by supporting multiple mapping areas. The mapped
virtual addresses of the individual areas must not overlap, while the
page tables of a newly added area might already be partially present.
In particular, the top level page table exists only once, of course.
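
To make the above more concrete, here is a rough sketch of the data
layout this implies; the names are purely illustrative and do not match
the real libxc structures:

    #include <stdint.h>

    /* Illustrative sketch only: per-mapping virtual range plus the
     * per-level coverage used to decide which page tables still need
     * to be allocated when a new area is added. */
    #define MAX_PGTABLE_LEVELS  4
    #define MAX_MAPPINGS        2   /* e.g. kernel mapping + p2m list */

    struct level_extent {
        uint64_t from;   /* lowest virtual address covered on this level */
        uint64_t to;     /* highest virtual address covered on this level */
    };

    struct mapping_area {
        uint64_t va_start;                            /* mapped virtual base */
        uint64_t pfn_start;                           /* first backing pfn */
        struct level_extent lvls[MAX_PGTABLE_LEVELS]; /* per-level coverage */
    };

    struct domain_pgtables {
        unsigned int n_mappings;                /* restricted to 1 for now */
        struct mapping_area maps[MAX_MAPPINGS];
    };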


Currently the number of mappings is restricted to 1 because the only
mapping for now is the initial mapping created by the toolstack. There
should be no behaviour change and no guest visible change introduced.

If my understanding is correct, can you please add that to the commit
message?

Sure.

I'm currently thinking about changing this patch even further. While
doing similar work in grub-xen I found the page table building there
to be much more generic and more compact. Instead of open coding the
different page table levels, everything is done there in a loop over
those levels. The main loop consists of only 33 lines (and this is
after adding support for multiple mapping areas)!
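
For illustration, the loop-over-levels idea could look roughly like the
following minimal sketch, which only counts the tables needed per level;
it is not the actual grub-xen or libxc code, and the names are made up:

    #include <stdint.h>

    #define PAGE_SHIFT 12

    /* Count the page tables needed to map [va_start, va_end] with a
     * 4-level, 9-bits-per-level layout, using one loop over the levels
     * instead of open coding each level.  Purely illustrative. */
    static unsigned int count_pgtables(uint64_t va_start, uint64_t va_end)
    {
        unsigned int lvl, tables = 0;

        for ( lvl = 1; lvl <= 4; lvl++ )
        {
            /* Each table on this level maps 1 << bits bytes. */
            unsigned int bits = PAGE_SHIFT + 9 * lvl;
            uint64_t mask = (((uint64_t)1) << bits) - 1;
            uint64_t from = va_start & ~mask;   /* round_down(va_start, mask) */
            uint64_t to   = va_end | mask;      /* round_up(va_end, mask) */

            tables += (unsigned int)(((to - from) >> bits) + 1);
        }

        return tables;
    }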

What do you think?


That's of course OK. It's patch review work for me either way. One patch
is as good as another.

You haven't seen the coding yet. ;-)

Given this is a particularly thorny area, a second eye would be much
appreciated.

On the positive side: any bug in this code should be really easy to
spot, as the domain wouldn't be able to work reliably (this was at least
my experience when developing this patch and the related one in grub).

Some comments below.

[...]
+    from = round_down(from, mask);
+    to = round_up(to, mask);
+
+    for ( map_n = 0, map_cmp = domx86->maps; map_n < domx86->n_mappings;
+          map_n++, map_cmp++ )
      {
-        start = round_down(start, mask);
-        end = round_up(end, mask);
-        tables = ((end - start) >> bits) + 1;
+        if ( map_cmp->lvls[lvl].from == map_cmp->lvls[lvl].to )
+            continue;
+        if ( from >= map_cmp->lvls[lvl].from && to <= map_cmp->lvls[lvl].to )
+            return;  /* Area already completely covered on this level. */
+        if ( from >= map_cmp->lvls[lvl].from && from <= map_cmp->lvls[lvl].to )
+            from = map_cmp->lvls[lvl].to + 1;
+        if ( to >= map_cmp->lvls[lvl].from && to <= map_cmp->lvls[lvl].to )
+            to = map_cmp->lvls[lvl].from - 1;

Is it possible that a later mapping covers the previous ones? How is that
handled?

I'm not sure I understand your question.

In case you are asking whether different mappings are allowed to overlap
in terms of virtual addresses: no. I haven't added any checking code to
ensure this. I can do it, if you want.


As I understand it, the sentence "different mappings are not allowed
to overlap" states a requirement, not the current status.

It doesn't make sense to support this. It would require mapping multiple
pfns to the same virtual address, which is impossible. So yes, this is a
requirement.

I'll add some code to make sure overlapping does not occur.

The above code snippet is used to ensure different mappings don't
overlap, but I think there is one case missing:

    from < map_cmp->lvls[lvl].from && to > map_cmp->lvls[lvl].to

If this is not expected we should probably add an assert here; otherwise
that case needs to be handled as well.

This can only happen if the virtual areas are overlapping.
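
For what it's worth, the promised overlap check could be as simple as
the following sketch (hypothetical helper names, not the actual patch
code); asserting on overlap would then also rule out the case mentioned
above by construction:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct map_range {
        uint64_t va_start;
        uint64_t va_end;    /* inclusive */
    };

    static bool ranges_overlap(const struct map_range *a,
                               const struct map_range *b)
    {
        return a->va_start <= b->va_end && b->va_start <= a->va_end;
    }

    /* To be called before appending map_new to maps[]; overlapping
     * virtual areas would require two pfns at one virtual address. */
    static void check_new_mapping(const struct map_range *maps,
                                  unsigned int n_mappings,
                                  const struct map_range *map_new)
    {
        unsigned int i;

        for ( i = 0; i < n_mappings; i++ )
            assert(!ranges_overlap(&maps[i], map_new));
    }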


Juergen
