
Re: [PATCH] memory: arrange to conserve on DMA reservation


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 16 Jun 2025 16:23:40 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Mon, 16 Jun 2025 14:23:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.06.2025 15:27, Roger Pau Monné wrote:
> On Tue, Feb 25, 2025 at 03:58:34PM +0100, Jan Beulich wrote:
>> Entities building domains are expected to deal with higher order
>> allocation attempts (for populating a new domain) failing. If we set
>> aside a reservation for DMA, try to avoid taking higher order pages from
>> that reserve pool.
>>
>> Instead favor order-0 ones which often can still be
>> supplied from higher addressed memory, even if we've run out of
>> large/huge pages there.
> 
> I would maybe write that last sentence as: force non-zero-order
> allocations to use the non-DMA region, and if that region cannot
> fulfill the request, return an error to the caller for it to retry
> with a smaller order. Effectively this limits allocations from the
> DMA region to order 0 during physmap domain population.

I can take this text, sure.
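
For illustration only (nothing the patch itself introduces), the fallback on
the toolstack side could look roughly like the hypothetical helper below. It
merely sketches the intended caller behaviour, assuming the usual libxc
xc_domain_populate_physmap() interface; exact signatures and error handling
vary across versions, so treat it as a sketch rather than a reference
implementation.

#include <xenctrl.h>

/*
 * Hypothetical helper, for illustration: populate "count" GFNs given in
 * pfns[] (one entry per page), preferring extents of the given order and
 * falling back to order-0 batches once a larger allocation is refused
 * (e.g. because only the DMA reserve could still have satisfied it).
 * For the large-order path the 2^order GFNs starting at pfns[done] are
 * assumed to be contiguous.
 */
static int populate_with_fallback(xc_interface *xch, uint32_t domid,
                                  xen_pfn_t *pfns, unsigned long count,
                                  unsigned int order)
{
    unsigned long done = 0;

    while ( done < count )
    {
        unsigned long todo = count - done;
        int rc;

        /* Try one higher-order extent if enough aligned GFNs remain. */
        if ( order && todo >= (1UL << order) &&
             !(pfns[done] & ((1UL << order) - 1)) )
        {
            rc = xc_domain_populate_physmap(xch, domid, 1, order, 0,
                                            &pfns[done]);
            if ( rc == 1 )
            {
                done += 1UL << order;
                continue;
            }
            /* Refused - fall back to order 0 for the remainder. */
            order = 0;
        }

        /* Populate the rest as order-0 extents. */
        rc = xc_domain_populate_physmap(xch, domid, todo, 0, 0, &pfns[done]);
        if ( rc <= 0 )
            return rc < 0 ? rc : -1;
        done += rc;
    }

    return 0;
}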

>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -192,6 +192,14 @@ static void populate_physmap(struct memo
>>           * delayed.
>>           */
>>          a->memflags |= MEMF_no_icache_flush;
>> +
>> +        /*
>> +         * Heuristically assume that during domain construction the caller is
>> +         * capable of falling back to order-0 allocations, allowing us to
>> +         * conserve on memory otherwise held back for DMA purposes.
>> +         */
>> +        if ( a->extent_order )
>> +            a->memflags |= MEMF_no_dma;
> 
> For PV domains: is it possible for the toolstack to try to allocate a
> certain number of pages from the DMA pool for the benefit of the
> domain?

Not directly at least. To benefit the domain, it would also need to be
told where in PFN space those pages would have ended up.

> I also wonder whether it would make sense to implement the logic on
> the toolstack side, in meminit_{hvm,pv}()?
> 
> No strong opinion, but it would mean slightly less logic in the
> hypervisor, and wouldn't change the interface for any existing
> toolstacks that don't pass MEMF_no_dma on purpose.

MEMF_no_dma isn't exposed outside of the hypervisor.
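
To make the effect of the flag more concrete, below is a simplified sketch of
how the allocator honours it, following alloc_domheap_pages() in
xen/common/page_alloc.c; symbol names follow the existing allocator, but
locking, zone clamping, and the other special cases are elided, so take it as
illustrative rather than authoritative.

/*
 * Illustrative sketch (not verbatim): first try to satisfy the request
 * from memory above the DMA reserve; only widen the search to the full
 * range (which includes the reserve) when MEMF_no_dma isn't set.
 */
static struct page_info *alloc_domheap_sketch(struct domain *d,
                                              unsigned int order,
                                              unsigned int memflags)
{
    unsigned int zone_hi = NR_ZONES - 1, dma_zone;
    struct page_info *pg = NULL;

    /* Prefer memory above the DMA reserve. */
    if ( dma_bitsize && (dma_zone = bits_to_zone(dma_bitsize)) < zone_hi )
        pg = alloc_heap_pages(dma_zone + 1, zone_hi, order, memflags, d);

    /* Dip into the reserve only if the caller didn't ask us to conserve it. */
    if ( pg == NULL && !(memflags & MEMF_no_dma) )
        pg = alloc_heap_pages(MEMZONE_XEN + 1, zone_hi, order, memflags, d);

    return pg;
}

With populate_physmap() setting MEMF_no_dma for non-zero orders, a large
request that can't be met above the reserve simply fails, and the caller is
expected to retry with order 0.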

Jan