
RE: [Xen-devel] Error restoring DomU when using GPLPV



I think I've tracked down the cause of this problem
in the hypervisor, but am unsure how best to fix it.

In tools/libxc/xc_domain_save.c, the static variable p2m_size
is said to be the "number of pfns this guest has (i.e. number
of entries in the P2M)".  But apparently p2m_size is getting
set to a very large number (0x100000) regardless of the
maximum pseudophysical memory of the HVM guest.  As a result,
some "magic" pages in the 0xf0000-0xfefff range are getting
placed in the save file.  But since they are not "real"
pages, the restore process runs beyond the maximum number
of physical pages allowed for the domain and fails.
(The gpfns of the last 24 pages saved are f2020, fc000-fc012,
feffb, feffc, feffd, feffe.)

p2m_size is set in "save" with a call to a memory_op hypercall
(XENMEM_maximum_gpfn), which for an HVM domain returns
d->arch.p2m->max_mapped_pfn.  I suspect that the meaning
of max_mapped_pfn changed at some point to better match
its name, but this changed the semantics of the hypercall
as used by xc_domain_restore, resulting in this curious
problem.
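
For reference, a minimal sketch of how "save" obtains the value,
assuming the 3.4-era libxc interface as I recall it from
xenctrl.h (int handle from xc_interface_open(), xc_memory_op());
the domid below is hypothetical:

#include <stdio.h>
#include <xenctrl.h>

int main(void)
{
    int xc_handle = xc_interface_open();
    uint32_t dom = 1;          /* hypothetical domid */
    unsigned long p2m_size;

    if (xc_handle < 0)
        return 1;

    /* For an HVM domain the hypervisor answers with
     * d->arch.p2m->max_mapped_pfn, not the RAM page count. */
    p2m_size = xc_memory_op(xc_handle, XENMEM_maximum_gpfn, &dom) + 1;
    printf("p2m_size = 0x%lx\n", p2m_size);

    xc_interface_close(xc_handle);
    return 0;
}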

Any thoughts on how to fix this?

> -----Original Message-----
> From: Annie Li 
> Sent: Tuesday, September 01, 2009 10:27 PM
> To: Keir Fraser
> Cc: Joshua West; James Harper; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Error restoring DomU when using GPLPV
> 
> 
> 
> > It seems this problem is connected with gnttab, not shareinfo.
> > I changed some grant-table code in the winpv driver (no longer
> > using the balloon-down shinfo+gnttab method), and
> > save/restore/migration now work properly on Xen 3.4.
> >
> > What I changed is that the winpv driver now uses the
> > XENMEM_add_to_physmap hypercall to map only the grant-table
> > pages that its devices require, instead of mapping all 32
> > grant-table pages during initialization.  It seems those
> > extra grant-table mappings caused this problem.
> 
> I am wondering whether those extra grant-table mappings are
> the root cause of the migration problem, or whether it now
> merely works by luck, as with Linux PVHVM?
> 
> Thanks
> Annie.
> 
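
For anyone unfamiliar with the hypercall Annie describes, here
is a hedged sketch of the idiom as the Linux PV drivers use it;
the winpv driver goes through its own hypercall wrappers, so the
header paths and names below are the Linux ones, and
map_grant_frame is an illustrative helper:

#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* Map one grant-table frame on demand via XENMEM_add_to_physmap;
 * 'idx' selects the frame, 'gpfn' is an illustrative guest frame
 * number to place it at. */
static int map_grant_frame(unsigned long idx, unsigned long gpfn)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_grant_table,
        .idx   = idx,
        .gpfn  = gpfn,
    };

    /* One frame as needed, rather than all 32 up front. */
    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}

Mapping frames only on demand keeps gpfns above the guest's RAM
out of the physmap until a device actually needs them, which
would explain why the save sweep no longer trips over them.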

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

