Re: [Xen-devel] [PATCH v11 00/27] COarse-grain LOck-stepping Virtual Machines for Non-stop Service
Changlong Xie writes ("[PATCH v11 00/27] COarse-grain LOck-stepping Virtual Machines for Non-stop Service"): > This patchset implemented the COLO feature for Xen. > For detail/install/use of COLO feature, refer to: > http://wiki.xen.org/wiki/COLO_-_Coarse_Grain_Lock_Stepping Thanks for this series. I've now gone and at least looked at each of these patches. I think this is an important feature which I want to see in Xen 4.7. Most of the patches seem to do roughly sensible things in roughly sensible ways. I have reviewed some of the areas where they touch common code, and some of the API and protocol design. (I don't think it very valuable to review the implementation in detail.) Overall I think most of this is in good shape and on track. But as you see from my mails I have some serious questions about the disk checkpointing/plumbing architecture. I have reservations about the use of qemu for this. I think this would perhaps be better done as a devmapper module. But I don't feel I understand it well enough yet. I have read the docs which have been provided. The QEMU implementation doc was helpful but I still feel confused. I think I should go away and read it again more closely. In the meantime, answers to my questions would be helpful. If after discussion and further thought I do still think that doing this in qemu is the wrong place, that doesn't mean that we need to block this series in the hope of it being rearchitected. It just means that I want to make sure that the _interface_ to libxl, as seen from the outside and especially as seen from the user's point of view, does not preclude future design changes. In particular, what I care about in this context is that the libxl API (and the xl config syntax) 1. doesn't preclude an implementation of the same functionality elsewhere, and 2. doesn't preclude COLO for PV guests (or hvm-lite-ng guests). Thanks for your attention. I'm afraid I'm going to be away out of the office for all of next week. So I will pick this up again when I get back. Regards, Ian. _______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxx http://lists.xen.org/xen-devel