
Re: [Xen-devel] HVM save/restore issue

On Tue, Mar 20, 2007 at 10:01:48AM +0000, Keir Fraser wrote:
> On 20/3/07 08:46, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:
> >> Out of interest: why would you do this? I glanced upon the code you are
> >> referring to in xc_hvm_restore.c yesterday, and it struck me as 
> >> particularly
> >> gross. All three PFNs (ioreq, bufioreq, xenstore) could be saved in the
> >> store after building the domain and then saved/restored as part of the
> >> Python-saved data. The situation is easier than for a PV guest because PFNs
> > 
> > Saving all PFNs directly is a good idea. I wrote the current code this way
> > to keep the create and restore paths similar.
> > I'd like to save/restore all the PFNs directly in xc_hvm_{save,restore}.
> > Is this what you want?
> Other thoughts on xc_hvm_restore as it stands, and its abuse of the
> 'store_mfn' parameter to pass in memory_static_min. I think this can
> reasonably be got rid of:
>  1. Do the setmaxmem hypercall in Python. There's no reason to be doing it
> in xc_hvm_save().

1. xc_linux_save also does the setmaxmem hypercall.
2. Even if we do it in Python, we still need memsize for the setmaxmem call.
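For reference, moving the call into the Python toolstack would amount to something like the sketch below. The `hvm_maxmem_kb` helper and its `slack_kb` parameter are hypothetical names for illustration; the actual hypercall would go through the `xen.lowlevel.xc` binding's `domain_setmaxmem`.

```python
# Hypothetical helper: compute the setmaxmem argument from memsize.
# The toolstack would then call xc.domain_setmaxmem(domid, maxmem_kb)
# with the returned value instead of doing this inside xc_hvm_save().

def hvm_maxmem_kb(memsize_mb, slack_kb=4096):
    """memsize is in MB; the hypercall takes KiB. slack_kb is an
    illustrative allowance for extra pages (e.g. video RAM)."""
    return memsize_mb * 1024 + slack_kb
```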

>  2. Instead of preallocating the HVM memory, populate the physmap on demand
> as we do now in xc_linux_restore. I'd do this by having an 'allocated
> bitmap', indexed by guest pfn, where a '1' means that page is already
> populated. Alternatively we might choose to avoid needing the bitmap by
> always doing populate_physmap() whenever we see a pfn, and have Xen
> guarantee that to be a no-op if RAM is already allocated at that pfn.

The current HVM restore first creates the memory layout (the same one used at
creation time) and then shapes it gradually, so it needs memsize when creating
the guest.

It seems you want a different method: save the memory layout in xc_hvm_save and
populate the same one on restore, right?
That's fine with me. BTW, I prefer the bitmap way if we can make it efficient.
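A minimal sketch of the allocated-bitmap approach, under stated assumptions: `restore_page` and `populate_physmap` are illustrative names, and the stand-in `populate_physmap` here just counts allocations rather than making the real `xc_domain_memory_populate_physmap` call.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for the real populate_physmap hypercall;
 * here it only counts how many pages we would actually allocate. */
static unsigned long pages_allocated;
static void populate_physmap(unsigned long pfn)
{
    (void)pfn;
    pages_allocated++;
}

/* One bit per guest pfn: 1 means that page is already populated. */
static void restore_page(uint8_t *bitmap, unsigned long pfn)
{
    if (!(bitmap[pfn / 8] & (1u << (pfn % 8)))) {
        populate_physmap(pfn);
        bitmap[pfn / 8] |= 1u << (pfn % 8);
    }
    /* ...then copy the saved page contents into the guest frame... */
}
```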

> If we go the bitmap route I'd just make it big enough for a 4GB guest up
> front (only 128kB required) and then realloc() it to be twice as big
> whenever we go off the end of the current bitmap.
>  -- Keir
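The doubling scheme could look roughly like this (a sketch only; `bitmap_ensure` is an illustrative name). The 128kB figure follows from 4GB of 4kB pages at one bit each: (4GB / 4kB) / 8 = 128kB.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* 4GB of 4kB pages, one bit per page: (4GB / 4kB) / 8 = 128kB up front. */
#define INITIAL_BITMAP_SIZE (128 * 1024)

static uint8_t *bitmap;
static size_t bitmap_size;

/* Grow the bitmap (doubling) until it covers 'pfn'. Returns 0 on success. */
static int bitmap_ensure(unsigned long pfn)
{
    size_t needed = pfn / 8 + 1;

    if (bitmap == NULL) {
        bitmap_size = INITIAL_BITMAP_SIZE;
        bitmap = calloc(1, bitmap_size);
        if (!bitmap)
            return -1;
    }
    while (needed > bitmap_size) {
        uint8_t *p = realloc(bitmap, bitmap_size * 2);
        if (!p)
            return -1;
        memset(p + bitmap_size, 0, bitmap_size); /* zero the new half */
        bitmap = p;
        bitmap_size *= 2;
    }
    return 0;
}
```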

best rgds,

Xen-devel mailing list