
Re: [Xen-devel] [PATCH] Paging and memory sharing for HVM guests



>>> Grzegorz Milos <gm281@xxxxxxxxx> 17.12.09 00:14 >>>
>The series of 46 patches attached to this email contains the initial
>implementation of memory paging and sharing for Xen. Patrick Colp
>leads the work on the pager, and I am mostly responsible for memory
>sharing. We would be grateful for any comments/suggestions you might
>have. Individual patches are labeled with comments describing their
>purpose and a sign-off footnote. Of course we are happy to discuss
>them in more detail, as required. Assuming that there are no major
>objections against including them in the mainstream xen-unstable tree,
>we would like to move future development to that tree.

An overview of the design would be helpful, to give readers a basic
understanding before they look at the individual patches. In particular,
from a first brief look, I get the impression that only HVM guests'
pages can be subject to paging.

On the Linux patches:

Introducing another bogus failure indicator for the mmap_batch
privcmd operations seems rather undesirable - we already need to
find a backwards-compatible replacement for the current or-ing in
of 0xf0000000, which is broken now that MFNs can be wider than
28 bits.
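
To illustrate the collision (a rough sketch only, not the actual privcmd
code; the helper names are made up for illustration):

#include <stdint.h>

/* Legacy-style per-frame error reporting: a mapping failure is signalled
 * by or-ing the top nibble into the 32-bit MFN slot of the user-supplied
 * array. */
#define MMAPBATCH_ERROR_MASK 0xf0000000U

static void mark_frame_failed(uint32_t *mfnp)
{
    *mfnp |= MMAPBATCH_ERROR_MASK;   /* clobbers bits 28-31 of the MFN */
}

static int frame_failed(uint32_t mfn)
{
    /* Any successfully mapped frame whose MFN is >= 1 << 28 already has
     * some of these bits set, so it can be mistaken for a failure. */
    return (mfn & MMAPBATCH_ERROR_MASK) == MMAPBATCH_ERROR_MASK;
}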

Using msleep() with hard-coded delays (in at least one case even
contradicting the accompanying comment) seems more like a hack
than a permanent solution. Couldn't the completion be signalled
instead, or alternatively, couldn't a polling hypercall be provided?
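
As a rough sketch of the signalling alternative (the wait-queue and flag
names below are made up for illustration and are not taken from the
patches), the requester could block on a wait queue instead of sleeping
in a loop:

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(paging_wq);
static int paging_op_done;

/* Instead of "while (!paging_op_done) msleep(10);" the requester sleeps
 * until it is explicitly woken. */
static int wait_for_paging_op(void)
{
    return wait_event_interruptible(paging_wq, paging_op_done);
}

/* Completion path (e.g. an event-channel handler) sets the condition
 * and wakes the waiter. */
static void paging_op_completed(void)
{
    paging_op_done = 1;
    wake_up_interruptible(&paging_wq);
}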

Removing support for IOCTL_PRIVCMD_MMAP from the pv-ops
implementation seems unrelated to this series, so it should probably
be a separate patch.

Also, most of the patches seem to use blanks instead of tabs for
indentation, and occasionally use other non-standard formatting.

Jan

