
Re: [PATCH] memory: arrange to conserve on DMA reservation



On Tue, Feb 25, 2025 at 03:58:34PM +0100, Jan Beulich wrote:
> Entities building domains are expected to deal with higher order
> allocation attempts (for populating a new domain) failing. If we set
> aside a reservation for DMA, try to avoid taking higher order pages from
> that reserve pool.
>
> Instead favor order-0 ones which often can still be
> supplied from higher addressed memory, even if we've run out of
> large/huge pages there.

I would maybe write that last sentence as: force non-zero order
allocations to use the non-DMA region, and if that region cannot
fulfill the request, return an error to the caller so it can retry with
a smaller order.  Effectively this limits allocations from the DMA
region to order 0 during physmap domain population.
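
For completeness, the caller-side pattern I have in mind is roughly the
below.  Just a sketch: populate_one_chunk() is a made-up wrapper around
a single XENMEM_populate_physmap request, and a real domain builder's
order selection and error handling are more involved.

#include <stdint.h>

typedef unsigned long xen_pfn_t;   /* stand-in for the real Xen type */

/* Hypothetical helper: populate one extent of the given order at gfn. */
int populate_one_chunk(uint32_t domid, xen_pfn_t gfn, unsigned int order);

static int populate_range(uint32_t domid, xen_pfn_t gfn, unsigned long nr_4k)
{
    unsigned int order = 9;                /* prefer 2M (512 * 4k) extents */

    while ( nr_4k )
    {
        if ( order && (nr_4k < (1UL << order) ||
                       populate_one_chunk(domid, gfn, order)) )
        {
            /*
             * Higher order refused (e.g. the non-DMA region is exhausted):
             * retry the remainder with order-0 pages.
             */
            order = 0;
            continue;
        }

        if ( !order && populate_one_chunk(domid, gfn, 0) )
            return -1;                     /* genuinely out of memory */

        gfn   += 1UL << order;
        nr_4k -= 1UL << order;
    }

    return 0;
}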

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> ---
> RFC: More generally for any requests targeting remote domains?

I think limiting this to domain creation is fine, more so given that
other flags are already set there.

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -192,6 +192,14 @@ static void populate_physmap(struct memo
>           * delayed.
>           */
>          a->memflags |= MEMF_no_icache_flush;
> +
> +        /*
> +         * Heuristically assume that during domain construction the caller is
> +         * capable of falling back to order-0 allocations, allowing us to
> +         * conserve on memory otherwise held back for DMA purposes.
> +         */
> +        if ( a->extent_order )
> +            a->memflags |= MEMF_no_dma;
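
FWIW, the way I read what the flag buys us on the allocation path is
roughly the below.  Purely illustrative pseudo-code: the two helpers are
made up, and the real logic in page_alloc.c works on zones and differs
in detail.

/*
 * Hypervisor context assumed (struct page_info, struct domain,
 * MEMF_no_dma).  Made-up helpers: "above the DMA reservation" vs.
 * "anywhere, including the DMA reserve".
 */
struct page_info *alloc_above_dma_reserve(struct domain *d,
                                          unsigned int order,
                                          unsigned int memflags);
struct page_info *alloc_anywhere(struct domain *d, unsigned int order,
                                 unsigned int memflags);

static struct page_info *alloc_for_domain(struct domain *d,
                                          unsigned int order,
                                          unsigned int memflags)
{
    /* Always prefer memory that's of no use to address-limited DMA. */
    struct page_info *pg = alloc_above_dma_reserve(d, order, memflags);

    if ( pg || (memflags & MEMF_no_dma) )
        return pg;  /* with MEMF_no_dma a failure is final; caller retries */

    /* Without the flag, dipping into the DMA reserve is still allowed. */
    return alloc_anywhere(d, order, memflags);
}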

For PV domains: is it possible for the toolstack to try to allocate a
certain number of pages from the DMA pool for the benefit of the
domain?

I also wonder whether it would make sense to implement this logic on
the toolstack side instead, i.e. in meminit_{hvm,pv}()?

No strong opinion, but it would mean slightly less logic in the
hypervisor, and it wouldn't change the interface for possibly existing
toolstacks that don't pass MEMF_no_dma on purpose.
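
For reference, what I'd picture on the toolstack side is something
along these lines.  Just a sketch: xc_domain_populate_physmap{,_exact}()
are the existing libxenctrl calls, but SP_2M_SHIFT and populate() are
made up, alignment checks are skipped, and all of the xc_dom_image
bookkeeping the real meminit_{hvm,pv}() does is omitted.

#include <xenctrl.h>

#define SP_2M_SHIFT 9U   /* 2M superpage = 2^9 4k pages (illustrative name) */

/* "extents" is assumed to hold nr contiguous, 2M-aligned gfns. */
static int populate(xc_interface *xch, uint32_t domid,
                    xen_pfn_t *extents, unsigned long nr)
{
    unsigned long done = 0;

    while ( nr - done >= (1UL << SP_2M_SHIFT) )
    {
        /* One 2M extent at a time for simplicity; real code batches. */
        int rc = xc_domain_populate_physmap(xch, domid, 1, SP_2M_SHIFT, 0,
                                            &extents[done]);

        if ( rc != 1 )
            break;                 /* higher order refused: fall back to 4k */
        done += 1UL << SP_2M_SHIFT;
    }

    /* Populate whatever remains (possibly everything) with order-0 pages. */
    if ( done < nr &&
         xc_domain_populate_physmap_exact(xch, domid, nr - done, 0, 0,
                                          &extents[done]) )
        return -1;

    return 0;
}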

Thanks, Roger.
