Re: [Xen-devel] making xenstore domain easy configurable
On 28/06/16 16:17, Doug Goldstein wrote:
> On 6/28/16 8:59 AM, Andrew Cooper wrote:
>> On 28/06/16 14:36, Juergen Gross wrote:
>>> On 28/06/16 14:42, Andrew Cooper wrote:
>>>> On 28/06/16 12:56, Juergen Gross wrote:
>>>>> On 28/06/16 13:03, Ian Jackson wrote:
>>>>>> Juergen Gross writes ("Re: [Xen-devel] making xenstore domain easy
>>>>>> configurable"):
>>>>>>> So you are telling me the xenstore domain won't work for this case?
>>>>>> Yes.
>>>>> That's rather unfortunate. So in order to be able to make xenstore
>>>>> domain a common setup we need to find a solution for support of
>>>>> xs_restrict() via xenbus, right?
>>>>>
>>>>> TBH, the way xs_restrict() was introduced is rather weird. It is
>>>>> completely bound to the socket interface of oxenstored. So anyone
>>>>> wanting to use xs_restrict() is limited to oxenstored running in
>>>>> dom0. There is no way to use xenstored or a xenstore domain. I'm
>>>>> really disappointed such a design was accepted and is now the reason
>>>>> for not being able to disaggregate dom0.
>>>>>
>>>>> I've searched through the xen-devel archives and found a very
>>>>> interesting mail:
>>>>>
>>>>> http://lists.xen.org/archives/html/xen-devel/2010-04/msg01318.html
>>>>>
>>>>> The "restrict" feature was added without any further discussion of
>>>>> how it is implemented, or of the fact that the C variant doesn't
>>>>> support it. The explicit question about non-existing features in the
>>>>> C xenstored was answered just with "the xenstore wire protocol
>>>>> doesn't change".
>>>>>
>>>>> With:
>>>>>
>>>>> http://lists.xen.org/archives/html/xen-devel/2010-07/msg00091.html
>>>>>
>>>>> the XS_RESTRICT value in xs_wire.h (aah, suddenly it was changed?)
>>>>> was added. Again, no mention of the special implementation in
>>>>> oxenstored.
>>>>>
>>>>> Really, this is not how open source development should be done!
>>>>> Maybe I'm just upset now, but I'm in favor of dropping xs_restrict()
>>>>> support as it has been introduced in a foul way.
>>>> I don't think the lack of xs_restrict() working over the ring should
>>>> preclude these improvements to the configuration of how xenstored
>>>> starts up.
>>> It is limiting the solution by not allowing me to drop the sockets
>>> completely.
>> I don't think dropping the sockets completely is a sensible course of
>> action. I had come to the conclusion that you were just not going to
>> use them, as opposed to removing them entirely.
>>
>> For xenstored running in the same domain as the toolstack, sockets are
>> less overhead than the shared memory ring, as no hypercalls are
>> involved. There is also the unfortunate problem that one of the two
>> Linux devices for xenstored *still* causes deadlocks when used; a
>> problem which has been unresolved since Linux 3.14.
> Since Xen 4.7 the broken devices won't be used by default. I understand
> that the socket interface is faster and has less overhead, but my
> question is: how much data is actually sent over this interface? Is the
> small performance difference really enough to justify keeping two
> different methods rather than trimming the code base down to one?

My gut feeling is that the XenServer bootstorm scalability tests will
notice. Our metric of "how fast is it to boot 1000 VMs" is very
important for VDI, as employees tend to get to work and try to log in
during the same short time period.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel