
Re: [Xen-devel] Live migration with MMIO pages



On 2/11/07 10:42, "Keir Fraser" <keir@xxxxxxxxxxxxx> wrote:

> I also note that guarding the mark-dirty-or-zap-writable-bit with
> mfn_valid() is not really correct. mfn_valid() only checks whether the mfn <
> max_page. I bet this would not work if you migrate on a machine with 4GB of
> RAM, as the MMIO hole will be below max_page. Really mfn_valid needs to
> handle such MMIO holes, or the shadow code needs to be using a test other
> than mfn_valid in many places (e.g., the function iomem_page_test() that you
> added before).

Actually, Tim reckons the use of mfn_valid() is okay: although you will
temporarily lose the _PAGE_RW bit on your MMIO mapping when the MMIO mfn is
below max_page, you've now fixed up the fault path so that the _PAGE_RW bit
is restored the next time the guest attempts a write access.
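The argument can be sketched with a toy model of the log-dirty write-fault
handler. This is not the actual Xen shadow code; mfn_is_ram(), the dirty
bitmap, and the mfn ranges are simplified assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define _PAGE_RW (1u << 1)

/* Toy setup: mfns 0..7 are RAM, mfns 8..15 model an MMIO hole that
 * nonetheless lies below max_page. */
static bool dirty[16];

static bool mfn_is_ram(uint32_t mfn)
{
    return mfn < 8;
}

/* On a write fault during log-dirty migration: a RAM page is marked
 * dirty; an MMIO mapping whose mfn happens to be below max_page simply
 * gets its _PAGE_RW bit back. Losing the bit earlier is therefore only
 * a transient performance hit, not a correctness problem. */
static uint32_t handle_write_fault(uint32_t mfn, uint32_t pte_flags)
{
    if (mfn_is_ram(mfn))
        dirty[mfn] = true;       /* log the write for the migration pass */
    return pte_flags | _PAGE_RW; /* re-grant write access either way */
}
```

In this model a write to mfn 3 both sets dirty[3] and restores _PAGE_RW,
while a write to mfn 10 (the MMIO hole) restores _PAGE_RW without touching
the dirty log.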

However, we think that the !mfn_valid() test that gates adding
_PAGE_PAT|_PAGE_PCD|_PAGE_PWT to the passthru flags should go away. We'll
already have validated those flags even for ordinary RAM mappings for a PV
guest, and there are cases where the cache attributes have to differ for
RAM pages. So the test should probably pass those flags through
unconditionally whenever the domain is !shadow_mode_refcounts.
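The suggested change amounts to keying the cache-attribute passthrough on
the domain type rather than on mfn_valid(). A minimal sketch, with
illustrative bit values matching the x86 PTE layout and a bool standing in
for the real shadow_mode_refcounts() predicate:

```c
#include <stdbool.h>
#include <stdint.h>

/* x86 PTE cache-attribute bits. */
#define _PAGE_PWT (1u << 3)
#define _PAGE_PCD (1u << 4)
#define _PAGE_PAT (1u << 7)
#define CACHE_ATTR_MASK (_PAGE_PAT | _PAGE_PCD | _PAGE_PWT)

/* Before: cache attributes were passed through only for mfns that
 * failed the mfn_valid() check, i.e. assumed-MMIO pages. */
static uint32_t passthru_flags_old(uint32_t guest_flags, bool mfn_is_valid)
{
    return mfn_is_valid ? (guest_flags & ~CACHE_ATTR_MASK) : guest_flags;
}

/* After: for a !shadow_mode_refcounts (PV) domain the guest's cache
 * attributes are passed through unconditionally, since ordinary RAM
 * pages may legitimately need non-default attributes too. */
static uint32_t passthru_flags_new(uint32_t guest_flags, bool refcounts)
{
    return refcounts ? (guest_flags & ~CACHE_ATTR_MASK) : guest_flags;
}
```

The old gating would strip _PAGE_PCD from a RAM mapping that genuinely
needed it; the new form preserves it for any PV-style domain.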

 -- Keir




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

