
Re: [PATCH] xen/mm: move adjustment of claimed pages counters on allocation


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 29 Dec 2025 09:12:38 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 29 Dec 2025 08:13:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 23.12.2025 18:55, Roger Pau Monné wrote:
> On Tue, Dec 23, 2025 at 11:59:54AM +0100, Jan Beulich wrote:
>> On 23.12.2025 09:15, Roger Pau Monne wrote:
>>> The current logic splits the update of the amount of available memory in
>>> the system (total_avail_pages) and pending claims into two separately
>>> locked regions.  This leads to a window between counter adjustments where
>>> the result of total_avail_pages - outstanding_claims doesn't reflect the
>>> real amount of free memory available, and can return a negative value due
>>> to total_avail_pages having been updated ahead of outstanding_claims.
>>>
>>> Fix by adjusting outstanding_claims and d->outstanding_pages in the same
>>> place where total_avail_pages is updated.  This can possibly lead to the
>>> pages failing to be assigned to the domain later, after they have already
>>> been subtracted from the claimed amount.  Ultimately this would result in a
>>> domain losing part of its claim, but that's better than the current skew
>>> between total_avail_pages and outstanding_claims.
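[A minimal sketch of the consolidated update described in the commit message, assuming the heap_lock, total_avail_pages, outstanding_claims and d->outstanding_pages names from xen/common/page_alloc.c; the code below is illustrative, not the patch itself:]

    /*
     * Illustrative only: with both counters adjusted inside the same
     * heap_lock-protected region, total_avail_pages - outstanding_claims
     * cannot go transiently negative between the two updates.
     */
    spin_lock(&heap_lock);

    total_avail_pages -= request;             /* pages handed out */

    if ( d && d->outstanding_pages )
    {
        /* Consume no more claim than the domain still holds. */
        unsigned long consumed =
            min(request, (unsigned long)d->outstanding_pages);

        d->outstanding_pages -= consumed;
        outstanding_claims -= consumed;
    }

    spin_unlock(&heap_lock);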
>>
>> For the system as a whole - yes. For just the domain rather not. It may be
>> a little cumbersome, but can't we restore the claim from the error path
>> after failed assignment? (In fact the need to (optionally) pass a domain
>> into free_heap_pages() would improve symmetry with alloc_heap_pages().)
> 
> Passing a domain parameter to free_heap_pages() is not that much of an
> issue.  The problem with restoring the claim value on failure to
> assign is the corner cases.  For example consider an allocation that
> depletes the existing claim, allocating more than what was left to be
> claimed.  Restoring the previous claim value on failure to assign to
> the domain would be tricky.  It would require returning the consumed
> claim from alloc_heap_pages(), so that alloc_domheap_pages() could
> restore it on failure to assign.
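[For illustration, a hypothetical shape of such an error path, with the consumed claim handed back through an invented out parameter (claim_consumed); the signatures only approximate the real Xen ones:]

    unsigned long claim_consumed = 0;
    struct page_info *pg = alloc_heap_pages(zone_lo, zone_hi, order,
                                            memflags, d, &claim_consumed);

    /* assign_pages() returns non-zero on failure. */
    if ( pg && assign_pages(pg, 1UL << order, d, memflags) )
    {
        /* Assignment failed: return the pages ... */
        free_heap_pages(pg, order, false);

        /* ... and give the consumed claim back to the domain. */
        spin_lock(&heap_lock);
        outstanding_claims += claim_consumed;
        d->outstanding_pages += claim_consumed;
        spin_unlock(&heap_lock);

        pg = NULL;
    }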
> 
> However, I was looking at the possible failure causes of
> assign_pages() and I'm not sure there's much point in attempting to
> restore the claimed amount.  Current cases where assign_pages() will
> fail:
> 
>  - Domain is dying: keeping the claim is irrelevant, the domain is
>    dying anyway.
> 
>  - tot_pages > max_pages: inconsistent domain state, and a claim
>    should never be bigger than max_pages.
> 
>  - tot_pages + alloc > max_pages: only possible if alloc is using
>    claimed pages plus unclaimed ones, as the claim cannot be bigger than
>    max_pages.  Such alloc is doomed to fail anyway, and would point at
>    the claim value being incorrectly set.
> 
>  - tot_pages + alloc < alloc: overflow of tot_pages, should never
>    happen with claimed pages as tot_pages <= max_pages, and claim <=
>    max_pages.
> 
> However, that only covers the current code in assign_pages(); there's no
> guarantee that future changes won't introduce new failure cases.
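[A simplified rendering of the failure conditions enumerated above; the real checks live in assign_pages() in xen/common/page_alloc.c and differ in detail:]

    if ( unlikely(d->is_dying) )                 /* domain is dying */
        return -EINVAL;

    if ( unlikely(d->tot_pages > d->max_pages) ) /* inconsistent state */
        return -E2BIG;

    if ( unlikely(d->tot_pages + nr > d->max_pages || /* over max_pages */
                  d->tot_pages + nr < nr) )           /* tot_pages overflow */
        return -E2BIG;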
> 
> Having said all that, I have a prototype that restores the claimed
> amount that I could send to the list.  It involves adding two extra
> parameters to free_heap_pages(): the domain and the claim amount to
> restore.  It's not super-nice, but I was expecting it to be worse.

With the justification above I'd be okay with the claim not being
restored upon failure; the extra logic could then be added if and when
an error case appears which would make it desirable to restore the
claim.

Jan



 

