Re: [PATCH v2 0/4] tools/xenstore: add some new features to the documentation
Hi Jan,

On 18/07/2022 17:12, Jan Beulich wrote:
> On 27.05.2022 09:24, Juergen Gross wrote:
>> In the past some shortcomings have been spotted in the Xenstore
>> interface, which should be repaired. In detail these are:
>>
>> - Using driver domains for a large number of domains needs per-domain
>>   Xenstore quota [1]. The feedback sent was rather slim (one reply),
>>   but it preferred a new set of wire commands.
>>
>> - XSA-349 [2] has shown that the current definition of watches is not
>>   optimal, as it will trigger lots of events when a single one would
>>   suffice: for detecting new backend devices the backends in the Linux
>>   kernel register a watch for e.g. "/local/domain/0/backend", which
>>   will fire for ANY sub-node written below this node (on a test
>>   machine this added up to 91 watch events for only 3 devices). This
>>   can be limited dramatically by extending the XS_WATCH command to
>>   take another optional parameter specifying the depth of
>>   subdirectories to be considered for sending watch events ("0" would
>>   trigger a watch event only if the watched node itself is written).
>>
>> - New features like the above might make migration of guests between
>>   hosts with different Xenstore variants harder, so it should be
>>   possible to set the available feature set per domain. For socket
>>   connections it should be possible to read the available features.
>>
>> - The special watches @introduceDomain and @releaseDomain are rather
>>   cumbersome to use, as they only tell you that SOME domain has been
>>   introduced/released. Any consumer of those watches needs to scan all
>>   domains on the host in order to find out the domid, causing
>>   significant pressure on the dominfo hypercall (imagine a system with
>>   1000 domains running and one domain dying - there will be more than
>>   1000 watch events triggered and 1000 xl daemons will try to find out
>>   whether "their" domain has died). Those watches should be enhanced
>>   to optionally be specific to a single domain and to let the event
>>   carry the related domid.
>> As some of those extensions will need to be considered in the Xenstore
>> migration stream, they should be defined in one go (in fact the 4th
>> one wouldn't need that, but it can easily be connected to the 2nd
>> one). As such extensions need to be flagged in the "features" field in
>> the ring page anyway, it is fine to implement them independently.
>>
>> Add the documentation of the new commands/features.
>>
>> [1]: https://lists.xen.org/archives/html/xen-devel/2020-06/msg00291.html
>> [2]: http://xenbits.xen.org/xsa/advisory-349.html
>>
>> Changes in V2:
>> - added new patch 1
>> - remove feature bits for dom0-only features
>> - get-features without domid returns Xenstore supported features
>> - get/set-quota without domid for global quota access
>>
>> Juergen Gross (4):
>>   tools/xenstore: modify feature bit specification in xenstore-ring.txt
>>   tools/xenstore: add documentation for new set/get-feature commands
>>   tools/xenstore: add documentation for new set/get-quota commands
>>   tools/xenstore: add documentation for extended watch command
>
> Hmm, looks like I did commit v1 of this series, not noticing the v2
> _and_ seeing there had been R-b with no other follow-up (leaving aside
> the v2) in a long time. Please advise if I should revert the commits.
> I'm sorry for the confusion. (I also wonder why the R-b weren't
> carried over to v2.)

Patch #1 is a new patch. Patches #2, #3 and #4 have been reworded and
the overall interaction is different. So I don't think the Reviewed-by
tags should have been carried over.

I had some concerns about v1 which were addressed in v2. I reviewed v2
a while ago. From my perspective, patches #1, #3 and #4 are ready to
go. Patch #2 needs a respin, and we also need to clarify the
integration with migration/live-update.

As you have committed the series, I would be OK with this being
addressed in a follow-up series. But it *must* be addressed by the time
4.17 is released, because otherwise we will commit ourselves to a
broken interface.

@Henry, please add this to the blocker list.

Cheers,

-- 
Julien Grall