[PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them
Document the properties of the various allocators and lay out a clear
rubric for when to use each.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
---
This doc is my understanding of the properties of the current
allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
proposed new wrapper, xvmalloc.

xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
similar functions in Linux (kmalloc, vmalloc, and kvmalloc
respectively).

CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Roger Pau Monne <roger.pau@xxxxxxxxxx>
CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
CC: Julien Grall <julien@xxxxxxx>
---
 .../memory-allocation-functions.rst           | 118 ++++++++++++++++++
 1 file changed, 118 insertions(+)
 create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst

diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
new file mode 100644
index 0000000000..15aa2a1a65
--- /dev/null
+++ b/docs/hypervisor-guide/memory-allocation-functions.rst
@@ -0,0 +1,118 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Xenheap memory allocation functions
+===================================
+
+In general Xen contains two pools (or "heaps") of memory: the *xen
+heap* and the *dom heap*.  Please see the comment at the top of
+``xen/common/page_alloc.c`` for the canonical explanation.
+
+This document describes the various functions available to allocate
+memory from the xen heap: their properties, and the rules for when
+each should be used.
+
+
+TLDR guidelines
+---------------
+
+* By default, ``xvmalloc`` (or its helper cognates) should be used
+  unless you know you have specific properties that need to be met.
+
+* If you need memory which must be physically contiguous, and may be
+  larger than ``PAGE_SIZE``...
+
+  - ...and the size is an exact power-of-2 number of pages, use
+    ``alloc_xenheap_pages``.
+
+  - ...and the size is not a power-of-2 number of pages, use
+    ``xmalloc`` (or its helper cognates).
+
+* If you don't need memory to be physically contiguous, and know the
+  allocation will always be larger than ``PAGE_SIZE``, you may use
+  ``vmalloc`` (or one of its helper cognates).
+
+* If you know that the allocation will always be smaller than
+  ``PAGE_SIZE``, you may use ``xmalloc``.
+
+Properties of various allocation functions
+------------------------------------------
+
+Ultimately, the underlying allocator for all of these functions is
+``alloc_xenheap_pages``.  They differ in several properties:
+
+1. What the underlying allocation sizes are.  This in turn has an
+   effect on:
+
+   - How much memory is wasted when the requested size doesn't match
+
+   - How such allocations are affected by memory fragmentation
+
+   - How such allocations affect memory fragmentation
+
+2. Whether the underlying pages are physically contiguous
+
+3. Whether allocation and deallocation require the cost of mapping
+   and unmapping
+
+``alloc_xenheap_pages`` will allocate a physically contiguous set of
+pages in power-of-2 sizes (orders).  No mapping or unmapping is done.
+However, if it is used for sizes not close to ``PAGE_SIZE * (1 << n)``,
+a lot of space will be wasted.  Such allocations may fail if memory
+becomes very fragmented, but they do not tend to contribute much to
+that fragmentation.
+
+As such, ``alloc_xenheap_pages`` should be used when you need exactly
+``1 << n`` physically contiguous pages (that is, ``PAGE_SIZE * (1 <<
+n)`` bytes).
+
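+An illustrative sketch of such an allocation follows.  The function
+name is made up for the example, and the headers and flag value are
+assumptions about the usual declarations rather than a definitive
+recipe:
+
+.. code-block:: c
+
+    #include <xen/errno.h>
+    #include <xen/mm.h>
+
+    static int example_contiguous_alloc(void)
+    {
+        /* Allocate 4 (i.e. 1 << 2) physically contiguous xenheap pages. */
+        void *buf = alloc_xenheap_pages(2, 0);
+
+        if ( !buf )
+            return -ENOMEM;
+
+        /* ... use the buffer ... */
+
+        /* Free with the same order that was used for allocation. */
+        free_xenheap_pages(buf, 2);
+
+        return 0;
+    }
+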
+``xmalloc`` is actually two separate allocators.  Allocations of less
+than ``PAGE_SIZE`` are handled using ``xmem_pool_alloc()``, and
+allocations of ``PAGE_SIZE`` or more are handled using
+``xmalloc_whole_pages()``.
+
+``xmem_pool_alloc()`` is a pool allocator which allocates xenheap
+pages on demand as needed.  This is ideal for small, quick
+allocations: no pages are mapped or unmapped; sub-page allocations
+are expected, so a minimum of space is wasted; and because xenheap
+pages are allocated one-by-one, 1) such allocations are unlikely to
+fail unless Xen is genuinely out of memory, and 2) they don't have a
+major effect on memory fragmentation.
+
+Allocations of ``PAGE_SIZE`` or more are not possible with the pool
+allocator, so for such sizes ``xmalloc`` calls
+``xmalloc_whole_pages()``, which in turn calls ``alloc_xenheap_pages``
+with an order large enough to satisfy the request, and then frees all
+the pages which aren't used.
+
+Like the pool allocator, this incurs no mapping or unmapping overhead.
+Allocations will be physically contiguous (like
+``alloc_xenheap_pages``), but not as much space is wasted as with a
+plain ``alloc_xenheap_pages`` allocation.  However, such an allocation
+may fail if memory is fragmented to the point that a contiguous
+allocation of the appropriate size cannot be found; such allocations
+also tend to fragment memory more.
+
+As such, ``xmalloc`` may be called in cases where you know the
+allocation will be less than ``PAGE_SIZE``, or when you need a
+physically contiguous allocation which may be more than
+``PAGE_SIZE``.
+
+``vmalloc`` will allocate pages one-by-one and map them into a
+virtual memory area designated for the purpose, separated by a guard
+page.  Only full pages are allocated, so using it for allocations of
+less than ``PAGE_SIZE`` is wasteful.  The underlying memory will not
+be physically contiguous.  As such, it is not adversely affected by
+excessive system fragmentation, nor does it contribute to it.
+However, allocating and freeing require a map and an unmap operation
+respectively, both of which adversely affect system performance.
+
+Therefore, ``vmalloc`` should be used for allocations larger than a
+page in size which don't need to be physically contiguous.
+
+``xvmalloc`` is like ``xmalloc``, except that for allocations larger
+than ``PAGE_SIZE`` it calls ``vmalloc`` instead (see the usage sketch
+at the end of this document).  Thus ``xvmalloc`` should always be
+preferred unless:
+
+1. You need physically contiguous memory, and your size may end up
+   greater than ``PAGE_SIZE``; in which case you should use
+   ``xmalloc`` or ``alloc_xenheap_pages`` as appropriate.
+
+2. You are positive that ``xvmalloc`` will choose one specific
+   underlying implementation; in which case you should simply call
+   that implementation directly.
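+
+To illustrate the rubric above, here is a short, hedged sketch of the
+three interfaces side by side.  The structure, function name, and
+sizes are made up for the example; the ``xvmalloc_array()``/``xvfree()``
+calls and the ``xen/xvmalloc.h`` header assume the interface proposed
+in Jan's series and may change:
+
+.. code-block:: c
+
+    #include <xen/errno.h>
+    #include <xen/mm.h>        /* PAGE_SIZE */
+    #include <xen/vmap.h>      /* vmalloc() / vfree() */
+    #include <xen/xmalloc.h>   /* xmalloc() / xfree() */
+    #include <xen/xvmalloc.h>  /* xvmalloc() / xvfree(), as proposed */
+
+    struct small { unsigned int a, b; };   /* well under PAGE_SIZE */
+
+    static int example_allocs(unsigned int nr)
+    {
+        /* Small, sub-page allocation: the xmalloc pool allocator is fine. */
+        struct small *s = xmalloc(struct small);
+
+        /*
+         * Known to be larger than a page, and not required to be
+         * physically contiguous: vmalloc maps individually allocated
+         * pages into a contiguous virtual range.
+         */
+        unsigned char *table = vmalloc(4 * PAGE_SIZE);
+
+        /*
+         * Size may end up smaller or larger than a page: xvmalloc
+         * picks the cheaper underlying allocator automatically.
+         */
+        struct small *many = xvmalloc_array(struct small, nr);
+
+        if ( !s || !table || !many )
+        {
+            if ( many )
+                xvfree(many);
+            if ( table )
+                vfree(table);
+            xfree(s);          /* xfree(NULL) is a no-op */
+            return -ENOMEM;
+        }
+
+        /* ... use the allocations ... */
+
+        xvfree(many);
+        vfree(table);
+        xfree(s);
+
+        return 0;
+    }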
-- 
2.30.0