
Re: [Xen-devel] save image file format? and [RFC] tmem save/restore/migrate



At 00:00 +0100 on 18 Jun (1245283230), Dan Magenheimer wrote:
> Is there any documentation on the format of the file/stream
> used for save/restore/migrate?  Or is it just a sequence of
> chunks of bytes pre-defined only by the ordering of the
> save/restore code?
> 
> I found this 2006 xen-devel thread, but it doesn't look like
> any of it (other than the HVM additions) ever happened?  E.g. still
> no versioning, self-identification, extensibility?

It's still the ad-hoc stream-of-data format, I'm afraid.  The whole
format needs a good kicking.  It's not even portable between 32-bit and
64-bit tools.  Gianluca (Cc'd) is just starting to look at the
save/restore code in the hopes of making it more sane, so now would be a
good time to bring up any suggestions.

The usual way of adding new fields is to grab another negative number in
the length-of-the-next-block-of-frames field.
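
To make that concrete, the convention can be sketched roughly as below.
The marker names and values here are made up for illustration; the real
stream reserves its own negative XC_SAVE_ID_* constants for this.

```c
#include <stdint.h>

/* Illustrative extension markers -- the real stream uses Xen's own
 * negative XC_SAVE_ID_* values; these numbers are invented. */
#define SAVE_ID_EXAMPLE_VCPU_INFO   (-2)
#define SAVE_ID_EXAMPLE_TMEM        (-7)

typedef enum { REC_END, REC_PAGES, REC_EXTENSION } rec_kind_t;

/* The per-batch header is a signed count of page frames.  Positive
 * means "that many frames follow"; zero ends the page phase; any
 * negative value is an extension marker introducing an optional,
 * marker-specific record. */
static rec_kind_t classify_batch_header(int32_t count)
{
    if (count == 0)
        return REC_END;
    return (count < 0) ? REC_EXTENSION : REC_PAGES;
}
```

So "adding a field" means claiming a fresh negative number and teaching
both the writer and the reader's dispatch about it.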

BTW, the HVM save records supplied by Xen to libxc _are_
self-identifying and extensible (and there's room in the header for a
version number, though by sticking to the principle of only transferring
architectural state we've avoided the need to use it so far).  But
they're just dropped into the stream after the memory pages and before
xend glues on the qemu record.
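
For comparison, each HVM record is prefixed by a small descriptor along
these lines (a sketch from memory of the public save-format header;
field names are approximate):

```c
#include <stdint.h>

/* Sketch of the per-record descriptor in Xen's HVM save format.
 * Because every record carries its own typecode and length, a
 * reader can skip records it does not understand. */
struct hvm_save_descriptor {
    uint16_t typecode;   /* which kind of record follows */
    uint16_t instance;   /* e.g. which VCPU the record describes */
    uint32_t length;     /* byte length of the record body */
};

/* Skipping an unknown record is just advancing past the body. */
static uint64_t next_record_offset(uint64_t cur,
                                   const struct hvm_save_descriptor *d)
{
    return cur + sizeof(*d) + d->length;
}
```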

Cheers, 

Tim.

> http://lists.xensource.com/archives/html/xen-devel/2006-09/msg00440.html
> 
> I'm starting to look at save/restore/migrate for tmem and will
> need to communicate the following information via file/stream:
> 
> - pool id and characteristics (a small number of bytes of data),
>   for some small number of pools
> - for some classes of pools, some number of "pages" of data,
>   each page consisting of a "handle" (128 bits) and PAGE_SIZE
>   bytes of data associated with that handle
> - for some of these pages of data, a handle+invalidate (see below)
> - (optional) in some cases the pages will be pre-compressed;
>   each could be decompressed on the source side and recompressed
>   on the destination side, but that seems a sad waste of cpu
>   cycles (though it would be necessary if the compression
>   algorithm differed between source and destination); if
>   possible, save/transmit the data without decompressing it
> 
> Note that for the pages of data, dirtying during migration
> is not possible; invalidation, however, IS possible.  E.g.
> unlike normally addressable pages which may be transmitted
> multiple times during a live migration, a transmitted
> tmem page (handle and data) will be transmitted only once,
> but may be followed at some point with a handle+invalidate.
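
A minimal sketch of what such tmem wire records might look like,
assuming new record types hung off the existing stream.  Every name,
marker value, and field size below is hypothetical, not an agreed
format:

```c
#include <stdint.h>

/* Hypothetical tmem wire records -- all names and values invented
 * for illustration. */
#define TMEM_REC_POOL       1   /* pool id + characteristics */
#define TMEM_REC_PAGE       2   /* handle + page payload (sent once) */
#define TMEM_REC_INVALIDATE 3   /* handle only: drop a page sent earlier */

struct tmem_handle {            /* the 128-bit handle */
    uint64_t hi, lo;
};

struct tmem_page_rec {
    struct tmem_handle handle;
    uint32_t flags;             /* e.g. bit 0: payload is pre-compressed */
    uint32_t length;            /* payload bytes following this header */
};

/* Only PAGE records carry a payload; INVALIDATE is the handle alone. */
static int rec_has_payload(uint32_t type)
{
    return type == TMEM_REC_PAGE;
}

static uint64_t page_rec_bytes(const struct tmem_page_rec *r)
{
    return sizeof(*r) + r->length;
}
```

Since a page is transmitted at most once, the receiver can install it
immediately and simply delete it again if an INVALIDATE for the same
handle arrives later.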
> 
> The ordering of tmem info/pages vs current saved info/data
> is flexible, but the number of pages could be very large,
> so for live migrate, transmission should NOT be postponed
> until the "final pass" of normal page transmission (e.g.
> after the domain has been paused on the source machine).
> 
> I also need code to verify that the destination has tmem
> support and it is enabled.  Only PV domains can use tmem
> so no HVM changes should be necessary.
> 
> Any pointers or suggestions welcome, especially any thoughts
> on changes that might be required above libxc such as in
> python code or (heaven forbid) ioemu/qemu.
> 
> Thanks for any help!
> Dan
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Citrix Systems (R&D) Ltd.
[Company #02300071, SL9 0DZ, UK.]
