
Re: [Xen-devel] Error restoring DomU when using GPLPV





Keir Fraser wrote:
On 04/08/2009 12:34, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:

Like I said before -- unmapping the gnttab pages I think will not help you
for live migration, but I suppose it is a reasonable thing to do anyway.
For live migration I think xc_domain_save needs to get a bit smarter about
Xenheap pages in HVM guests.
Understood. Do you have any idea about why it worked fine under 3.3.x
but not 3.4.x?

The bit of code in 3.3's xc_domain_save.c that is commented "Skip PFNs that
aren't really there" is removed in 3.4. That will be the reason.

 -- Keir

Hi,

I started looking at this a couple of days ago, and I finally understand
what's going on. In our case, Windows migration/save-restore just fails, as
Annie/Wayne had reported.

In the short run, since frames for the VGA hole etc. are skipped anyway, can
we just put the above check back into libxc (Xen 3.4) and be OK?

thanks,
Mukesh


changeset:   18383:dade7f0bdc8d
user:        Keir Fraser <keir.fraser@xxxxxxxxxx>
date:        Wed Aug 27 14:53:39 2008 +0100
summary:     hvm: Use main memory for video memory.

diff -r 2397555ebcc2 -r dade7f0bdc8d tools/libxc/xc_domain_save.c
--- a/tools/libxc/xc_domain_save.c      Wed Aug 27 13:31:01 2008 +0100
+++ b/tools/libxc/xc_domain_save.c      Wed Aug 27 14:53:39 2008 +0100
@@ -1111,12 +1111,6 @@
                        (test_bit(n, to_fix)  && last_iter)) )
                     continue;

-                /* Skip PFNs that aren't really there */
-                if ( hvm && ((n >= 0xa0 && n < 0xc0) /* VGA hole */
-                             || (n >= (HVM_BELOW_4G_MMIO_START >> PAGE_SHIFT)
-                                 && n < (1ULL<<32) >> PAGE_SHIFT)) /* MMIO */ )
-                    continue;
-

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

