
Re: [PATCH v2 1/2] xen/mm: move adjustment of claimed pages counters on allocation


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 7 Jan 2026 15:27:16 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 07 Jan 2026 14:27:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Dec 29, 2025 at 09:44:49AM +0100, Jan Beulich wrote:
> On 24.12.2025 23:31, Andrew Cooper wrote:
> > On 24/12/2025 7:40 pm, Roger Pau Monne wrote:
> >> The current logic splits the update of the amount of available memory in
> >> the system (total_avail_pages) and pending claims into two separately
> >> locked regions.  This leads to a window between counters adjustments where
> >> the result of total_avail_pages - outstanding_claims doesn't reflect the
> >> real amount of free memory available, and can return a negative value due
> >> to total_avail_pages having been updated ahead of outstanding_claims.
> >>
> >> Fix by adjusting outstanding_claims and d->outstanding_pages in the same
> >> place where total_avail_pages is updated.  Note that accesses to
> >> d->outstanding_pages are protected by the global heap_lock, just like
> >> total_avail_pages or outstanding_claims.  Add a comment to the field
> >> declaration, and also adjust the comment at the top of
> >> domain_set_outstanding_pages() to be clearer in that regard.
> >>
> >> Finally, due to claims being adjusted ahead of pages having been assigned
> >> to the domain, add logic to re-gain the claim in case assign_page() fails.
> >> Otherwise the page is freed and the claimed amount would be lost.
> >>
> >> Fixes: 65c9792df600 ("mmu: Introduce XENMEM_claim_pages (subop of memory ops)")
> >> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >> ---
> >> Changes since v1:
> >>  - Regain the claim if allocated page cannot be assigned to the domain.
> >>  - Adjust comments regarding d->outstanding_pages being protected by the
> >>    heap_lock (instead of the d->page_alloc_lock).
> > 
> > This is a complicated patch, owing to the churn from adding extra
> > parameters.
> > 
> > I've had a go splitting this patch in half.  First to adjust the
> > parameters, and second the bugfix. 
> > https://gitlab.com/xen-project/hardware/xen-staging/-/commits/andrew/roger-claims
> > 
> > I think the result is nicer to follow.  Thoughts?
> 
> Question (from the unfinished v1 thread) being whether we actually need the
> restoration, given Roger's analysis of the affected failure cases.

Let's leave it out then.  It's certainly possible to add the claimed
amount back on failure, but given the intended usage of claims and the
failure cases of assign_pages() I don't think it's worth doing it
now.  It adds complexity for no real value.  A domain that fails in
assign_pages() during physmap population at domain creation is doomed
to be destroyed anyway, and hence possibly dropping (part of) the
claim is not relevant.
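
For the record, the restoration that's being dropped would amount to
something like the sketch below, which shows the extra churn being
avoided.  The helper name is made up; only heap_lock,
outstanding_claims and d->outstanding_pages are the real fields:

/*
 * Sketch only: give back the part of the claim consumed by an
 * allocation whose assign_pages() call subsequently failed.  "claimed"
 * is the amount subtracted from the claim at allocation time, which
 * the caller would have to remember across the call.
 */
static void regain_claim(struct domain *d, unsigned long claimed)
{
    spin_lock(&heap_lock);
    d->outstanding_pages += claimed;
    outstanding_claims += claimed;
    spin_unlock(&heap_lock);
}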

I will send a new version of the series with the approach used in v1.
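
For context, that approach boils down to consuming the claim inside
the same heap_lock-protected region that already removes the pages
from the free pool, roughly along the lines of the sketch below (the
helper name is invented; the counters are the ones discussed above):

/*
 * Sketch of the idea: adjust total_avail_pages and the claim counters
 * in a single critical section, so that readers computing
 * total_avail_pages - outstanding_claims never observe a transient
 * negative value.
 */
static void consume_claim_on_alloc(struct domain *d, unsigned long pages)
{
    ASSERT(spin_is_locked(&heap_lock));

    total_avail_pages -= pages;

    if ( d && d->outstanding_pages )
    {
        unsigned long consumed = pages;

        if ( consumed > d->outstanding_pages )
            consumed = d->outstanding_pages;

        d->outstanding_pages -= consumed;
        outstanding_claims -= consumed;
    }
}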

Thanks, Roger.
