
Re: [PATCH] memory: arrange to conserve on DMA reservation



On Mon, Jun 16, 2025 at 04:23:40PM +0200, Jan Beulich wrote:
> On 16.06.2025 15:27, Roger Pau Monné wrote:
> > On Tue, Feb 25, 2025 at 03:58:34PM +0100, Jan Beulich wrote:
> >> Entities building domains are expected to deal with higher order
> >> allocation attempts (for populating a new domain) failing. If we set
> >> aside a reservation for DMA, try to avoid taking higher order pages from
> >> that reserve pool. Instead favor order-0 ones which often can still be
> >> supplied from higher addressed memory, even if we've run out of
> >> large/huge pages there.
> > 
> > I would maybe write that last sentence as: force non-zero order
> > allocations to use the non-DMA region, and if that region cannot
> > fulfill the request, return an error to the caller for it to retry
> > with a smaller order.  Effectively this limits allocations from the
> > DMA region to order 0 only during physmap domain population.
> 
> I can take this text, sure.
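
FWIW, the caller-side behaviour that wording implies is roughly the
below.  This is only a sketch to illustrate the intended semantics, not
the real libxenguest code; populate_one(order) is a made-up stand-in
for a single XENMEM_populate_physmap request of (1UL << order) pages:

    /*
     * Illustrative only: populate_one(order) is a hypothetical helper
     * issuing one XENMEM_populate_physmap request of (1UL << order)
     * pages.
     */
    static int populate_with_fallback(unsigned long nr_pages)
    {
        unsigned int order = 9;              /* try 2M superpages first */

        while ( nr_pages )
        {
            if ( (1UL << order) > nr_pages )
                order = 0;                   /* tail chunk: use 4K pages */

            if ( populate_one(order) == 0 )
            {
                nr_pages -= 1UL << order;
                continue;
            }

            /*
             * With the patch a non-zero order attempt won't dip into
             * the DMA reserve; on failure drop to order 0, which may
             * still be satisfied from the DMA pool.
             */
            if ( !order )
                return -1;                   /* genuinely out of memory */
            order = 0;
        }

        return 0;
    }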
> 
> >> --- a/xen/common/memory.c
> >> +++ b/xen/common/memory.c
> >> @@ -192,6 +192,14 @@ static void populate_physmap(struct memo
> >>           * delayed.
> >>           */
> >>          a->memflags |= MEMF_no_icache_flush;
> >> +
> >> +        /*
> >> +         * Heuristically assume that during domain construction the caller is
> >> +         * capable of falling back to order-0 allocations, allowing us to
> >> +         * conserve on memory otherwise held back for DMA purposes.
> >> +         */
> >> +        if ( a->extent_order )
> >> +            a->memflags |= MEMF_no_dma;
> > 
> > For PV domains: is it possible for the toolstack to try to allocate a
> > certain number of pages from the DMA pool for the benefit of the
> > domain?
> 
> Not directly at least. To benefit the domain, it would also need to be
> told where in PFN space those pages would have ended up.

My question makes no sense anyway if MEMF_no_dma isn't exposed to the
hypercall interface.

> > I also wonder if it would make sense to attempt to implement the
> > logic on the toolstack side: meminit_{hvm,pv}()?
> > 
> > No strong opinion, but slightly less logic in the hypervisor, and
> > won't change the interface for possibly existing toolstacks that don't
> > pass MEMF_no_dma on purpose.
> 
> MEMF_no_dma isn't exposed outside of the hypervisor.

Oh, I see.
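
For completeness, and paraphrasing from memory (so the names below may
be slightly off; this is not the exact construct_memop_from_reservation()
code): the populate_physmap path only derives internal MEMF_* bits from
the XENMEMF_* ones carried in the reservation, so there's simply nothing
a toolstack could set that would turn into MEMF_no_dma:

    /* Illustrative sketch of the XENMEMF_* -> MEMF_* translation. */
    a->memflags = 0;
    if ( XENMEMF_get_address_bits(r->mem_flags) )
        a->memflags |= MEMF_bits(XENMEMF_get_address_bits(r->mem_flags));
    if ( r->mem_flags & XENMEMF_populate_on_demand )
        a->memflags |= MEMF_populate_on_demand;
    /* No XENMEMF_* bit maps to MEMF_no_dma; it can only be set
     * internally, as the patch does. */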

One question I have, though: on systems with a small amount of memory
(let's say 8GB), does this lead to an increase in domain construction
time due to having to fall back to order-0 allocations when running out
of non-DMA memory?
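
Back-of-the-envelope, assuming a simple 2M -> 4K fallback: populating
8GB at order 9 takes ~4096 extents, while populating the same amount at
order 0 takes ~2097152 extents, so whatever portion ends up falling back
to order 0 needs 512 times as many extents (and allocator calls) as it
would otherwise.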

Thanks, Roger.