
Re: [Xen-devel] [PATCH Remus v2 00/10] Remus support for Migration-v2

On 08/05/15 10:33, Yang Hongyang wrote:
> This patchset implements Remus support for Migration v2, but without
> memory compression.
> The series can be found on github:
> https://github.com/macrosheep/xen/tree/Remus-newmig-v2
> PATCH 1-7: Some refactor and prepare work.
> PATCH 8-9: The main Remus loop implement.
> PATCH 10: Fix for Remus.

I have reviewed the other half of the series now, and have some design
points to discuss.  (I was hoping to get this email sent in reply to v1,
but never mind.)  This largely concerns patch 7 and onwards.

Migration v2 has substantially more structure than the legacy format
did.  One issue so far is that your series relies on using more than one
END record, which is not supported by the spec.  (Of course, the spec is
fine to be extended in forward-compatible ways.)
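
For reference, a v2 record is framed roughly like this (a sketch; the
exact numeric type values, including whatever a CHECKPOINT record ends
up as, are for the spec to assign):

    /* Sketch of migration v2 record framing.  Fixed-width fields; the
     * body is padded with zeroes up to an 8-octet boundary. */
    struct mig_v2_rec_hdr {
        uint32_t type;    /* e.g. END, or a new CHECKPOINT type */
        uint32_t length;  /* length of the body, excluding padding */
        /* followed by 'length' octets of body, then padding */
    };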

To fix the qemu layering issues I need to have some explicit negotiation
between libxc and libxl about sharing ownership of the input fd.  This
is going to require a new record in the format, and I am currently
drafting a patch or two which should help in this regard.
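
As a straw man (every name below is made up for illustration and will
no doubt change once the patches exist), the negotiation could be as
simple as libxc emitting one new record at the point it wants to hand
the stream over:

    /* Straw-man only: a hypothetical record by which libxc signals that
     * libxl now owns the fd until libxl's end-of-checkpoint record.
     * Name and layout are illustrative, not what will be committed. */
    struct rec_toolstack_handover {
        uint32_t flags;      /* e.g. bit 0: libxl data follows */
        uint32_t _reserved;  /* keep the body 8-octet aligned */
    };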

My view of the eventual stream looks something like this (time going
downwards):

libxc writes:                   libxl writes:

Image Header
Domain Header
Checkpoint record
                                libxl qemu record
                                libxl end-of-checkpoint record
            ctx->save.callbacks->checkpoint() returns
Checkpoint record
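
In terms of the saving loop, that works out to something like this
(pseudocode; error handling omitted, and write_checkpoint_record() is
just a stand-in name for emitting the Checkpoint record above):

    /* Sketch of the Remus save loop on the libxc side.  The checkpoint()
     * callback is the existing hook; while it runs, libxl writes its qemu
     * record and end-of-checkpoint record on the same fd. */
    for ( ; ; )
    {
        send_dirty_pages(ctx);           /* memory deltas for this epoch */
        write_checkpoint_record(ctx);    /* hand the stream over to libxl */

        /* Returns once libxl has finished writing; a non-positive return
         * (exact convention per xenguest.h) means stop checkpointing. */
        if ( ctx->save.callbacks->checkpoint(ctx->save.callbacks->data) <= 0 )
            break;
    }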

This will eventually allow both libxc and libxl to send checkpoint data
(and by the looks of it, remove the need for postcopy()).  For this
libxc/remus work it is fine to use XG_LIBXL_HVM_COMPAT to cover the
current qemu situation, but I would prefer not to also be retrofitting
libxc checkpoint records when doing the libxl/migv2 work.

Does this look plausible for Remus (and eventually COLO) support?

