
Re: [Xen-devel] [PATCH v12 20/26] Support colo mode for qemu disk

Wen Congyang writes ("Re: [PATCH v11 20/27] Support colo mode for qemu disk"):
> On 03/18/2016 01:18 AM, Ian Jackson wrote:
> > If so, what software, where, arranges for the management of the
> > different qcow2 `layers' ?  Ie, what creates the layers; what resynchs
> > them, etc. ?
> The active disk and hidden disk are separate disks. The management
> application can create empty qcow2 disks before running COLO. These two
> disks start out empty and have the same size as the secondary disk.
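To make the setup above concrete, here is a minimal sketch of what such a management application might run on the secondary host. The paths and the 10 GiB size are illustrative assumptions, not anything from the patch series; the helper only builds the `qemu-img create` command lines rather than executing them.

```python
# Sketch: build the qemu-img invocations a management tool might use to
# create the two empty qcow2 disks (active and hidden) before starting
# COLO.  Both get the same virtual size as the secondary disk.

def colo_overlay_commands(secondary_disk_bytes, active_path, hidden_path):
    """Return qemu-img command lines creating empty qcow2 disks whose
    virtual size matches the secondary disk."""
    return [
        ["qemu-img", "create", "-f", "qcow2", path, str(secondary_disk_bytes)]
        for path in (active_path, hidden_path)
    ]

cmds = colo_overlay_commands(
    10 * 1024 ** 3,                      # assumed 10 GiB secondary disk
    "/var/lib/xen/colo/active.qcow2",    # hypothetical paths
    "/var/lib/xen/colo/hidden.qcow2",
)
for c in cmds:
    print(" ".join(c))
```

A real tool would of course execute these (and query the secondary disk's size with `qemu-img info` rather than hard-coding it).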

It is a shame that this management code is not also here.

We would like to have enough management code in xen.git that we can
introduce a COLO test in osstest.  That will ensure that your feature
does not regress.

> > Would it be possible for these disk names to have formulaic,
> > predictable names, so that they wouldn't need to be specified
> > separately ?

Unfortunately, AFAICT, I have not had an answer to this question.  I
think COLO is too important a feature to block because of these
concerns.  It is probably too late for 4.7 to address this fully.

However, I don't want to declare this API stable and fully supported as
it stands.

So can you please add comments to this patch saying

  Note that the COLO configuration settings should be considered
  unstable.  They may change incompatibly in future versions of Xen.

This should appear (at least) three times: in xl.pod.1,
xl-disk-configuration.txt and libxl_types.idl.  With such a change, I
will ack this patch.
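For context, the disk settings in question are the per-disk COLO
parameters added by this series (names taken from the patches; treat the
values as illustrative only):

```
disk = [ 'format=qcow2,vdev=hda,target=/dev/vg/guest,
          colo-host=secondary.example.org,colo-port=9000,
          colo-export=colo1,
          active-disk=/var/lib/xen/colo/active.qcow2,
          hidden-disk=/var/lib/xen/colo/hidden.qcow2' ]
```

It is exactly these keys that the unstable-API note should cover.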

In practice this means that it will not be feasible to support COLO
via libvirt or ganeti, for example.  But given your comments above
that there are pieces missing from what is going into xen.git, I think
that is unavoidable for now.

I hope we can continue this conversation after Xen 4.7 is released, to
try to stabilise this API and maybe move some of the setup code into
xen.git.
