[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them


  • To: "open list:X86" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxx>
  • Date: Tue, 16 Feb 2021 10:58:18 +0000
  • Accept-language: en-US
  • Arc-authentication-results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com; dkim=pass header.d=citrix.com; arc=none
  • Arc-message-signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=jjyGTJwvkLj5uA8DyYObNDRsgh7IhPrL2gMWZLzlAUM=; b=LM19gqSnFJVFQ2rEkPf2iRMKAS0qkNGcnbRuNEjXm56yIhNXakcnXw+izMIdnRR31/PMrqandkbHZ4bVIBX3gg1S6i5drO7CnER8hPaeXR4aeHTzAW/7sj6qm4s4e9uV6tTtv+0omLyr8vmwPU2TqZfXuUnp40Bt2SpSu4ws0leEpCQnrj6R/9g8xMEt3/sc061fY/ITGQReo8DodLL+2CCqMtn0o+n7w+8Z0O9lA77VSJ1qjRjHnAL9F8aimE0Fmx27x7Ijm5Cee0Li68cGzhY4BeizFJWpCircU2QClF1RUZQUPMiMPt95GAnHAc6baKTRNqpYerHOcBANy7wjzg==
  • Arc-seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=m1+kmodrfPEpXZa6Pbtthv9La0zCR2CDdaa42Aca80vQrI8nir4RPH1cxnXetZ4Cl/X5WKSYV0YiYjgxP0f+/BHKpM4w1bKnG+jzJ69KCPhIMqm76rMJzMuBCBDanipa/ECjEgNM5Luek9Uj2FFl6Hx+tmKD0O7cbyCtvCUbCAuwfHkfFY+X0s5I0LwA9c+gifxSJc6luNusuhUyDvtRFTnMsTW3xHB0GvcgUQlICKyT/VesIDTgTH9IoFIM2kep8iPWQ8gVlOJvlnKbu2s31o4mcc41KGrNL+OuPWadu/QcS5ZMs14Lj0ILQxscm6M3XAvZEwB7IU5KiRITvpKI2A==
  • Authentication-results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
  • Delivery-date: Tue, 16 Feb 2021 10:58:27 +0000
  • Ironport-sdr: CCG9AWwCfmEca/DAkqnGqUku2P3D+SjRuqHO2+eAuPVZp+OeBO3RofcU1TWB2G2J0kHT1j9pN0 4xYG8y1PGtKKP9/So+L08W61GvJzrg40eb1GHoCdAj0GotvbEtEoKfUUnCs8ZeBCWGpMgcAcoz 93+KnBdgUK7/VJmFysAh9FQQo8uiSEs7DAOeHvD9P716zCvuKes/aqIJNcKjARxi4a6QHMvWFR W1SbwHzmBbg8h3D49AM07LAn3tE178fFgi54Gx1tPqymO3HUkEqTbkLgryFuZKrIbKxJxyDcWf 5d8=
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHXBE6IzoDpW0Psi0SLnVWLvJvx9KpanKQA
  • Thread-topic: [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them


> On Feb 16, 2021, at 10:28 AM, George Dunlap <george.dunlap@xxxxxxxxxx> wrote:
> 
> Document the properties of the various allocators and lay out a clear
> rubric for when to use each.
> 
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> ---
> 
> This doc is my understanding of the properties of the current
> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
> proposed new wrapper, xvmalloc.
> 
> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
> respectively).
> 
> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> CC: Julien Grall <julien@xxxxxxx>
> ---
> .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
> 1 file changed, 118 insertions(+)
> create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
> 
> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst 
> b/docs/hypervisor-guide/memory-allocation-functions.rst
> new file mode 100644
> index 0000000000..15aa2a1a65
> --- /dev/null
> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
> @@ -0,0 +1,118 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Xenheap memory allocation functions
> +===================================
> +
> +In general Xen contains two pools (or "heaps") of memory: the *xen
> +heap* and the *dom heap*.  Please see the comment at the top of
> +``xen/common/page_alloc.c`` for the canonical explanation.
> +
> +This document describes the various functions available to allocate
> +memory from the xen heap: their properties and rules for when they should be
> +used.
> +
> +
> +TLDR guidelines
> +---------------
> +
> +* By default, ``xvmalloc`` (or its helper cognates) should be used
> +  unless you know you have specific properties that need to be met.
> +
> +* If you need memory which needs to be physically contiguous, and may
> +  be larger than ``PAGE_SIZE``...
> +  
> +  - ...and is order 2, use ``alloc_xenheap_pages``.
> +    
> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates)..
> +
> +* If you don't need memory to be physically contiguous, and know the
> +  allocation will always be larger than ``PAGE_SIZE``, you may use
> +  ``vmalloc`` (or one of its helper cognates).
> +
> +* If you know that allocation will always be less than ``PAGE_SIZE``,
> +  you may use ``xmalloc``.
> +
> +Properties of various allocation functions
> +------------------------------------------
> +
> +Ultimately, the underlying allocator for all of these functions is
> +``alloc_xenheap_pages``.  They differ on several different properties:
> +
> +1. What underlying allocation sizes are.  This in turn has an effect
> +   on:
> +
> +   - How much memory is wasted when requested size doesn't match
> +
> +   - How such allocations are affected by memory fragmentation
> +
> +   - How such allocations affect memory fragmentation
> +
> +2. Whether the underlying pages are physically contiguous
> +
> +3. Whether allocation and deallocation require the cost of mapping and
> +   unmapping
> +
> +``alloc_xenheap_pages`` will allocate a physically contiguous set of
> +pages on orders of 2.  No mapping or unmapping is done.  However, if
> +this is used for sizes not close to ``PAGE_SIZE * (1 << n)``, a lot of
> +space will be wasted.  Such allocations may fail if the memory becomes
> +very fragmented; but such allocations do not tend to contribute to
> +that memory fragmentation much.
> +
> +As such, ``alloc_xenheap_pages`` should be used when you need a size
> +of exactly ``PAGE_SIZE * (1 << n)`` physically contiguous pages.
> +
> +``xmalloc`` is actually two separate allocators.  Allocations of <
> +``PAGE_SIZE`` are handled using ``xmem_pool_alloc()``, and allocations >=
> +``PAGE_SIZE`` are handled using ``xmalloc_whole_pages()``.
> +
> +``xmem_pool_alloc()`` is a pool allocator which allocates xenheap
> +pages on demand as needed.  This is ideal for small, quick
> +allocations: no pages are mapped or unmapped; sub-page allocations are
> +expected, and so a minimum of space is wasted; and because xenheap
> +pages are allocated one-by-one, 1) they are unlikely to fail unless
> +Xen is genuinely out of memory, and 2) it doesn't have a major effect
> +on fragmentation of memory.
> +
> +Allocations of > ``PAGE_SIZE`` are not possible with the pool
> +allocator, so for such sizes, ``xmalloc`` calls
> +``xmalloc_whole_pages()``, which in turn calls ``alloc_xenheap_pages``
> +with an order large enough to satisfy the request, and then frees all
> +the pages which aren't used.
> +
> +Like the other allocator, this incurs no mapping or unmapping
> +overhead.  Allocations will be physically contiguous (like
> +``alloc_xenheap_pages``), but not as much is wasted as a plain
> +``alloc_xenheap_pages`` allocation.  However, such an allocation may
> +fail if memory fragmented to the point that a contiguous allocation of
> +the appropriate size cannot be found; such allocations also tend to
> +fragment memory more.
> +
> +As such, ``xmalloc`` may be called in cases where you know the
> +allocation will be less than ``PAGE_SIZE``; or when you need a
> +physically contiguous allocation which may be more than
> +``PAGE_SIZE``.  
> +
> +``vmalloc`` will allocate pages one-by-one and map them into a virtual
> +memory area designated for the purpose, separated by a guard page.
> +Only full pages are allocated, so using it from less tham
> +``PAGE_SIZE`` allocations is wasteful.  The underlying memory will not
> +be physically contiguous.  As such, it is not adversely affected by
> +excessive system fragmentation, nor does it contribute to it.
> +However, allocating and freeing requires a map and unmap operation
> +respectively, both of which adversely affect system performance.
> +
> +Therefore, ``vmalloc`` should be used for page allocations over a page
> +size in length which don't need to be physically contiguous.
> +
> +``xvmalloc`` is like ``xmalloc``, except that for allocations >
> +``PAGE_SIZE``, it calls ``vmalloc`` instead.  Thus ``xvmalloc`` should
> +always be preferred unless:
> +
> +1. You need physically contiguous memory, and your size may end up
> +   greater than ``PAGE_SIZE``; in which case you should use
> +   ``xmalloc`` or ``alloc_xenheap_pages`` as appropriate
> +
> +2. You are positive that ``xvmalloc`` will choose one specific
> +   underlying implementation; in which case you should simply call
> +   that implementation directly.

Basically, the more I look at this whole thing — particularly the fact that 
xmalloc already has an `if ( size > PAGE_SIZE)` inside of it — the more I think 
this last point is just a waste of everyone’s time.

I’m inclined to go with Julien’s suggestion, that we use xmalloc when we need 
physically contiguous memory (with a comment), and xvmalloc everywhere else.  
We can implement xvmalloc such that it’s no slower than xmalloc is currently 
(i.e., it directly calls `xmem_pool_alloc` when size < PAGE_SIZE, rather than 
calling xmalloc and having xmalloc do the comparison again).

 -George

 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.