
Re: [PATCH 05/12] xen: introduce reserve_heap_pages



Hi,

On 29/04/2020 23:46, Stefano Stabellini wrote:
On Fri, 17 Apr 2020, Jan Beulich wrote:
On 15.04.2020 03:02, Stefano Stabellini wrote:
Introduce a function named reserve_heap_pages (similar to
alloc_heap_pages) that allocates a requested memory range. Call
__alloc_heap_pages for the implementation.

Change __alloc_heap_pages so that the original page doesn't get
modified, giving back unneeded memory top to bottom rather than bottom
to top.

While it may be less of a problem within a zone, doing so is
against our general "return high pages first" policy.

Is this something you'd be OK with anyway?

If not, do you have a suggestion on how to do it better? I couldn't find
a nice way to do it without code duplication, or a big nasty 'if' in the
middle of the function.
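
As a rough illustration of the two splitting directions being discussed, here is a
toy, self-contained model -- not the actual Xen allocator code; the numbers and
names below are made up:

/*
 * Toy model of the chunk-splitting step: a chunk of 2^high pages starting
 * at pfn 'base' is split down to the requested 2^order pages.  "Bottom to
 * top" keeps the allocated block at the top of the chunk (so the caller's
 * page pointer moves); "top to bottom" keeps it at 'base', which is what a
 * reserve-exact-range caller needs.
 */
#include <stdio.h>

static void split_bottom_to_top(unsigned long base, unsigned int high,
                                unsigned int order)
{
    unsigned long pfn = base;
    unsigned int j = high;

    while ( j != order )
    {
        --j;
        printf("freed half: pfn %#lx, order %u\n", pfn, j);
        pfn += 1UL << j;               /* the allocated block moves up */
    }
    printf("allocated:  pfn %#lx, order %u\n\n", pfn, order);
}

static void split_top_to_bottom(unsigned long base, unsigned int high,
                                unsigned int order)
{
    unsigned int j = high;

    while ( j != order )
    {
        --j;
        /* give back the upper half, keep using the lower one */
        printf("freed half: pfn %#lx, order %u\n", base + (1UL << j), j);
    }
    printf("allocated:  pfn %#lx, order %u\n\n", base, order);
}

int main(void)
{
    /* want an order-1 block out of an order-4 (16-page) chunk at pfn 0x100 */
    split_bottom_to_top(0x100, 4, 1);
    split_top_to_bottom(0x100, 4, 1);
    return 0;
}

The second variant keeps the low end of the chunk for the caller, which is why it
goes against the general "return high pages first" policy Jan mentions, even though
a reserve_heap_pages() caller needs the exact block starting at 'start' anyway.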


@@ -1073,7 +1073,42 @@ static struct page_info *alloc_heap_pages(
          return NULL;
      }
-    __alloc_heap_pages(&pg, order, memflags, d);
+    __alloc_heap_pages(pg, order, memflags, d);
+    return pg;
+}
+
+static struct page_info *reserve_heap_pages(struct domain *d,
+                                            paddr_t start,
+                                            unsigned int order,
+                                            unsigned int memflags)
+{
+    nodeid_t node;
+    unsigned int zone;
+    struct page_info *pg;
+
+    if ( unlikely(order > MAX_ORDER) )
+        return NULL;
+
+    spin_lock(&heap_lock);
+
+    /*
+     * Claimed memory is considered unavailable unless the request
+     * is made by a domain with sufficient unclaimed pages.
+     */
+    if ( (outstanding_claims + (1UL << order) > total_avail_pages) &&
+          ((memflags & MEMF_no_refcount) ||
+           !d || d->outstanding_pages < (1UL << order)) )
+    {
+        spin_unlock(&heap_lock);
+        return NULL;
+    }

Where would such a claim come from? Given the purpose I'd assume
the function (as well as reserve_domheap_pages()) can actually be
__init. With that I'd then also be okay with them getting built
unconditionally, i.e. even on x86.

Yes, you are right, I'll make the function __init and also remove this
check on claimed memory.


+    pg = maddr_to_page(start);
+    node = phys_to_nid(start);
+    zone = page_to_zone(pg);
+    page_list_del(pg, &heap(node, zone, order));
+
+    __alloc_heap_pages(pg, order, memflags, d);

I agree with Julien in not seeing how this can be safe / correct.

I haven't seen any issues so far in my testing -- I imagine it is
because there aren't many memory allocations after setup_mm() and before
create_domUs() (which on ARM is called just before
domain_unpause_by_systemcontroller at the end of start_xen).

I am not sure why you exclude setup_mm(). Any memory allocated (boot allocator, xenheap) can clash with your regions. The main memory allocations are for the frametable and dom0; I would say you were lucky not to hit them.



I took a quick look at David's series. Is the idea that I should add a
patch to do the following:

- avoid adding these ranges to the xenheap in setup_mm, and wait until later
   (a bit like reserved_mem regions)

I guess by xenheap you mean domheap? But the problem is not only with the domheap; it also affects any memory allocated via the boot allocator. So you need to exclude those regions from any possible allocation.
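
For what it's worth, a rough, self-contained sketch of the kind of carve-out meant
here (illustrative only; in Xen the actual hand-off to the boot allocator would be
init_boot_pages(), replaced by a printf below so the example stands alone, and the
addresses are invented):

/*
 * Before handing a RAM bank to the boot allocator, punch out any ranges
 * reserved for direct-mapped domains so the allocator never sees them.
 */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct range { paddr_t start, end; };   /* [start, end) */

/* Stand-in for the real hand-off (init_boot_pages(ps, pe) in Xen). */
static void give_to_boot_allocator(paddr_t ps, paddr_t pe)
{
    if ( ps < pe )
        printf("boot allocator gets [%#llx, %#llx)\n",
               (unsigned long long)ps, (unsigned long long)pe);
}

/* Hand 'bank' to the allocator, skipping 'resv' (sorted, non-overlapping). */
static void add_bank_minus_reserved(struct range bank,
                                    const struct range *resv, unsigned int nr)
{
    paddr_t cur = bank.start;

    for ( unsigned int i = 0; i < nr && cur < bank.end; i++ )
    {
        if ( resv[i].end <= cur || resv[i].start >= bank.end )
            continue;                        /* no overlap with this bank */
        give_to_boot_allocator(cur, resv[i].start);
        if ( resv[i].end > cur )
            cur = resv[i].end;
    }
    give_to_boot_allocator(cur, bank.end);
}

int main(void)
{
    struct range bank = { 0x40000000, 0x80000000 };          /* 1GiB bank */
    struct range resv[] = {
        { 0x48000000, 0x50000000 },   /* e.g. memory promised to a domU */
        { 0x60000000, 0x60200000 },
    };

    add_bank_minus_reserved(bank, resv, 2);
    return 0;
}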


- in construct_domU, add the range to xenheap and reserve it with 
reserve_heap_pages

I am afraid you can't give the regions to the allocator and then allocate them afterwards. The allocator is free to use any page for its own purposes or to exclude it from allocation.

AFAICT, the allocator doesn't keep a list of pages in use; it only keeps track of free pages. So we can make the content of struct page_info look as if the page had been allocated by the allocator.
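
To illustrate the idea with a toy model (everything below is defined locally; the
field and flag names only mimic Xen's struct page_info and PGC_state_* and are not
the real definitions):

#include <assert.h>
#include <stdint.h>

#define PGC_state_free   0x1u
#define PGC_state_inuse  0x2u

struct toy_page_info {
    uint32_t count_info;      /* state bits + reference count in real Xen */
    uint32_t owner;           /* stand-in for the owning domain */
};

/* Mark a 2^order run of pages as if the heap allocator had handed it out. */
static void mark_reserved_pages_inuse(struct toy_page_info *pg,
                                      unsigned int order, uint32_t domid)
{
    for ( unsigned long i = 0; i < (1UL << order); i++ )
    {
        /* In this toy model, a page still tracked as free must not be
         * claimed this way. */
        assert(pg[i].count_info != PGC_state_free);
        pg[i].count_info = PGC_state_inuse;
        pg[i].owner = domid;
    }
}

int main(void)
{
    struct toy_page_info frametable[8] = { { 0, 0 } };

    mark_reserved_pages_inuse(frametable, 3, 1 /* d1 */);
    return 0;
}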

We would need to be careful when giving a page back to the allocator, as the page would need to be initialized (see [1]). This may not be a practical concern for Dom0less, as the domain may never be destroyed, but it matters from a correctness PoV.

For LiveUpdate, the original Xen will carve out space to be used by the boot allocator in the new Xen. But I don't think this is necessary in your context.

It should be sufficient to exclude the pages from the boot allocator (as we do for other modules).

One potential issue is that there is no easy way today to differentiate between pages that have been allocated and pages that have not yet been initialized. To make the code robust, we need to prevent a page from being used in two places. So for LiveUpdate we mark such pages with a special value, which is used afterwards to check that we are effectively using a reserved page.
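
A minimal sketch of that sentinel idea (the marker value and the names are made up
for illustration):

#include <assert.h>
#include <stdint.h>

#define TOY_STATE_RESERVED  0xdeadbeefu   /* hypothetical sentinel value */
#define TOY_STATE_INUSE     0x2u

static void claim_reserved_page(uint32_t *state)
{
    /* Fails loudly if the page is not (or no longer) marked reserved,
     * i.e. if something else already grabbed or never initialized it. */
    assert(*state == TOY_STATE_RESERVED);
    *state = TOY_STATE_INUSE;
}

int main(void)
{
    uint32_t page_state = TOY_STATE_RESERVED;   /* set when carved out at boot */

    claim_reserved_page(&page_state);
    return 0;
}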

I hope this helps.

Cheers,

[1] <20200319212150.2651419-2-dwmw2@xxxxxxxxxxxxx>

--
Julien Grall



 

