Re: [PATCH 2/2] xen/mm: do not assign pages to a domain until they are scrubbed
On Wed, Mar 25, 2026 at 05:26:01PM +0100, Roger Pau Monné wrote:
> On Wed, Mar 25, 2026 at 03:56:05PM +0100, Jan Beulich wrote:
> > On 25.03.2026 11:08, Roger Pau Monne wrote:
> > > * Disallowing XENMEM_decrease_reservation until the domain has finished
> > >   creation would fix the issue of pages being freed while pending scrub,
> > >   but it's not clear whether there might be other problematic usages,
> > >   as get_page() on non-scrubbed pages would still return success.
> >
> > I agree this is of concern.
> >
> > > --- a/xen/common/memory.c
> > > +++ b/xen/common/memory.c
> > > @@ -388,6 +388,12 @@ static void populate_physmap(struct memop_args *a)
> > >                          goto out;
> > >                      }
> > >                  }
> > > +
> > > +            if ( assign_page(page, a->extent_order, d, memflags) )
> > > +            {
> > > +                free_domheap_pages(page, a->extent_order);
> >
> > The pages don't have an owner set yet, so that function will go straight
> > to free_heap_pages(), needlessly passing "true" as last argument. Correct,
> > but (for large pages, which the stashing is about) highly inefficient.
>
> My bad, I was sure I was using the same freeing function as
> alloc_domheap_pages() on failure to assign, but I clearly wasn't. I
> will switch to using free_heap_pages().
Coming back to this, I can export free_heap_pages(), but then the call
would also unconditionally pass need_scrub == true, as the pages have
been allocated without scrubbing.  So I don't see much benefit in using
free_heap_pages(), especially since it would also require making that
function global.
Thanks, Roger.