
Re: [PATCH v3 01/22] mm: introduce xvmalloc() et al and use for grant table allocations


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 3 May 2021 16:54:19 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, "Stefano Stabellini" <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Mon, 03 May 2021 14:54:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, May 03, 2021 at 03:50:48PM +0200, Jan Beulich wrote:
> On 03.05.2021 13:31, Roger Pau Monné wrote:
> > On Thu, Apr 22, 2021 at 04:43:39PM +0200, Jan Beulich wrote:
> >> All of the array allocations in grant_table_init() can exceed a page's
> >> worth of memory, which xmalloc()-based interfaces aren't really suitable
> >> for after boot. We also don't need any of these allocations to be
> >> physically contiguous. Introduce interfaces dynamically switching
> >> between xmalloc() et al and vmalloc() et al, based on requested size,
> >> and use them instead.
> >>
> >> All the wrappers in the new header get cloned mostly verbatim from
> >> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
> >> for sizes and to unsigned int for alignments.
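
For reference, the size-based dispatch being described can be sketched
roughly as follows (an illustration of the idea, not necessarily the
exact patch code):

    void *_xvmalloc(size_t size, unsigned int align)
    {
        ASSERT(align <= PAGE_SIZE);
        /* Sub-page requests stay with the xmalloc() pool; anything
         * larger goes page-granular via vmalloc(), since physical
         * contiguity isn't required. */
        return size <= PAGE_SIZE ? _xmalloc(size, align) : vmalloc(size);
    }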
> > 
> > We seem to be growing a non-trivial number of memory allocation
> > function families: xmalloc, vmalloc and now xvmalloc.
> > 
> > I think from a consumer PoV it would make sense to only have two of
> > those: one for allocations that need to be physically contiguous,
> > and one for allocations that don't.
> > 
> > Even then, requesting physically contiguous allocations could be
> > done by passing a flag to the same interface that's used for
> > non-contiguous allocations.
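
A flag-based single entry point along those lines might look like this
(purely hypothetical; neither the name nor the flag exists in the tree):

    #define XALLOC_PHYS_CONTIG (1u << 0)   /* hypothetical flag */

    void *xalloc_flags(size_t size, unsigned int flags)
    {
        /* Physical contiguity only when explicitly requested; otherwise
         * pick whichever allocator suits the size. */
        if ( (flags & XALLOC_PHYS_CONTIG) || size <= PAGE_SIZE )
            return _xmalloc(size, SMP_CACHE_BYTES);
        return vmalloc(size);
    }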
> > 
> > Maybe another option would be to expand the existing
> > v{malloc,realloc,...} set of functions to have your proposed behaviour
> > for xv{malloc,realloc,...}?
> 
> All of this and some of your remarks further down have already been
> discussed. A working group has been formed. No progress since. Yes,
> a smaller set of interfaces may be the way to go. Controlling
> behavior via flags, otoh, is very much not malloc()-like. Making
> existing functions have the intended new behavior is a no-go without
> auditing all present uses, to find those few which actually may need
> physically contiguous allocations.

But you could make your proposed xvmalloc logic the implementation
behind vmalloc, as that would still be perfectly fine and safe (ie:
existing users of vmalloc already expect non-physically contiguous
memory)? You would just optimize the size < PAGE_SIZE case for that
interface?
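
That is, roughly the following, assuming today's vmalloc() body moves
into an internal page-granular helper (hypothetically vmalloc_pages()):

    void *vmalloc(size_t size)
    {
        /* Existing callers only rely on virtual contiguity, so sub-page
         * requests can safely be served from the xmalloc() pool. */
        if ( size <= PAGE_SIZE )
            return _xmalloc(size, SMP_CACHE_BYTES);
        return vmalloc_pages(size);  /* hypothetical: current vmalloc() body */
    }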

> Having seen similar naming elsewhere, I did propose xnew() /
> xdelete() (plus array and flex-struct counterparts) as the single
> new recommended interface; didn't hear back yet. But we'd switch to
> that gradually, so in the interim there would still be a larger set
> of interfaces.
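
Such typed wrappers could be spelled along these lines (hypothetical;
nothing of the sort has been committed):

    /* Hypothetical typed front ends over the xvmalloc() machinery. */
    #define xnew(type)   ((type *)_xvmalloc(sizeof(type), __alignof(type)))
    #define xdelete(ptr) xvfree(ptr)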
> 
> I'm not convinced we should continue to have byte-granular allocation
> functions producing physically contiguous memory. I think the page
> allocator should be used directly in such cases.
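
For the remaining physically contiguous cases, that would mean calling
the existing page-allocator interfaces directly, e.g.:

    unsigned int order = get_order_from_bytes(size);
    void *buf = alloc_xenheap_pages(order, 0);   /* physically contiguous */

    if ( buf )
    {
        /* ... use the buffer ... */
        free_xenheap_pages(buf, order);
    }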
> 
> >> --- /dev/null
> >> +++ b/xen/include/xen/xvmalloc.h
> >> @@ -0,0 +1,73 @@
> >> +
> >> +#ifndef __XVMALLOC_H__
> >> +#define __XVMALLOC_H__
> >> +
> >> +#include <xen/cache.h>
> >> +#include <xen/types.h>
> >> +
> >> +/*
> >> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
> >> + * worth of memory, as long as there's no need to have physically contiguous
> >> + * memory allocated.  These should be used in preference to xmalloc() et al
> >> + * whenever the size is not known to be constrained to at most a single page.
> > 
> > Even when it's known that size <= PAGE_SIZE these helpers are
> > appropriate, as they would end up using xmalloc, so I think it's fine
> > to recommend them universally as long as there's no need to alloc
> > physically contiguous memory?
> > 
> > Granted there's a bit more overhead from the logic to decide between
> > using xmalloc or vmalloc &c, but IMO that's not a big enough deal to
> > avoid recommending this interface globally for non-contiguous
> > allocations.
> 
> As long as xmalloc() and vmalloc() are meant to stay around as separate
> interfaces, I wouldn't want to "forbid" their use when it's sufficiently
> clear that they would be chosen by the new function anyway. Otoh, if the
> new function became more powerful in terms of falling back to the

What do you mean by more powerful here?

Thanks, Roger.
