Re: [Xen-devel] [XEN PATCH for-4.13 v2] x86/domctl: have XEN_DOMCTL_getpageframeinfo3 preemptible
On 25.11.2019 18:37, Anthony PERARD wrote:
> On Mon, Nov 25, 2019 at 05:22:19PM +0100, Jan Beulich wrote:
>> On 25.11.2019 15:59, Anthony PERARD wrote:
>>> This hypercall can take a long time to finish because it attempts to
>>> grab the `hostp2m' lock up to 1024 times. The accumulated wait for
>>> the lock can take several seconds.
>>>
>>> This can easily happen with a guest with 32 vcpus and plenty of RAM,
>>> during localhost migration.
>>>
>>> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
>>
>> As indicated on v1 already, this being a workaround rather than a fix
>> should be stated clearly in the description. Especially if more such
>> operations turn up, it'll become increasingly obvious that the root
>> of the problem will need dealing with rather than papering over some
>> of the symptoms. With this taken care of I'd be (still hesitantly)
>> willing to give my ack for this as a short term "solution".
>
> Sorry to have led you to believe that the patch was *the* solution to
> the problem described. I don't think the patch itself is a workaround
> or a fix, it is simply an improvement to the hypercall. That
> improvement could be used to remove the limit on `num' (something that
> I've read on xen-devel as a possible improvement).

Hmm, yes, this is a good point. I wonder why you don't drop the limit
then right away, at least for translated guests. This would then make
clear that ...

> Would it be enough to add the following paragraph to the commit
> description?
>
>     While the patch doesn't fix the problem with the lock contention
>     and the fact that the `hostp2m' lock is currently global (and not
>     on a single page), it is still an improvement to the hypercall.
>
> I don't like the terms "workaround" or "short term solution" as a
> description for this patch. Both imply that the patch could be
> reverted once the root issue is taken care of.

... indeed the patch isn't a candidate for reverting down the road
(which so far I did in fact imply). Of course if Jürgen indicated that
he'd be willing to accept the patch in its current form, but not in its
possible extended one, then - making the description state this
planned improvement _and_ there being a promise to actually follow up
for 4.14 - I'd be okay with the code change remaining as it is.

Then again - dropping the (arbitrary) limit on the number of entries
isn't going to be really helpful when the hypercall, even with this
limit in place, may already take several seconds, as you say. I'd
agree though that the change still is a long term improvement. So I
would probably indeed leave the code change as is, but amend your
suggested addition to the description by pointing out the possibility
of dropping the arbitrary limit.

>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>> index a03e80e5984a..1b69eb75cb20 100644
>>> --- a/xen/include/public/domctl.h
>>> +++ b/xen/include/public/domctl.h
>>> @@ -163,6 +163,10 @@ DEFINE_XEN_GUEST_HANDLE(xen_domctl_getdomaininfo_t);
>>>  #define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU<<28)
>>>
>>>  /* XEN_DOMCTL_getpageframeinfo3 */
>>> +/*
>>> + * Both value `num' and `array' are modified by the hypercall to allow
>>> + * preemption.
>>
>> s/are/may be/ ?
>
> I don't think the distinction is necessary. How would it be useful to
> know that both values may not be modified? I thought the goal of the
> added description was to warn against reusing the values after calling
> the hypercall.

If you write "are", you're saying that they _will_ be modified, i.e. a
caller may (even if just for some sanity checking) verify that the
fields indeed did change. I think wording in the public headers in
particular should precisely represent all possible behaviors.

Jan
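For context on why the public header needs to warn about `num' and `array'
changing at all: a preemptible domctl records its remaining work in the
caller-visible fields before asking Xen to restart the hypercall. The sketch
below is illustrative only and does not reproduce the actual patch; the
handler name is hypothetical, the per-PFN translation step is elided, and it
simply assumes the generic hypercall_preempt_check() /
hypercall_create_continuation() pattern used elsewhere in Xen.

```c
/*
 * Illustrative sketch only, NOT the actual patch: the generic Xen
 * pattern for making a long-running domctl preemptible.  Before the
 * hypercall is restarted, the handler stores its progress in the
 * caller-visible `num' and `array' fields, which is why a caller must
 * not rely on their values after the call.
 */
static long getpageframeinfo3_sketch(
    struct xen_domctl *domctl,
    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
{
    unsigned int i, num = domctl->u.getpageframeinfo3.num;
    long ret = 0;

    for ( i = 0; i < num; ++i )
    {
        /* ... translate one PFN, which takes the (global) p2m lock ... */

        if ( (i + 1) < num && hypercall_preempt_check() )
        {
            /* Record the work still to be done for the continuation. */
            domctl->u.getpageframeinfo3.num = num - (i + 1);
            guest_handle_add_offset(domctl->u.getpageframeinfo3.array,
                                    i + 1);
            ret = -ERESTART;
            break;
        }
    }

    if ( ret == -ERESTART )
    {
        /* Make the updated fields visible to the restarted hypercall. */
        if ( __copy_to_guest(u_domctl, domctl, 1) )
            ret = -EFAULT;
        else
            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
                                                "h", u_domctl);
    }

    return ret;
}
```

On restart, do_domctl re-reads the domctl structure from guest memory, so
the continuation resumes from the updated `num' and `array'; that is the
behavior the added header comment is meant to warn callers about.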