Re: [Xen-devel] [PATCH v2] xen/mm: Avoid assuming the page is inuse in assign_pages()
> -----Original Message-----
> From: Julien Grall <julien@xxxxxxx>
> Sent: 06 February 2020 10:39
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: julien@xxxxxxx; Durrant, Paul <pdurrant@xxxxxxxxxxxx>; Grall, Julien
> <jgrall@xxxxxxxxxx>
> Subject: [PATCH v2] xen/mm: Avoid assuming the page is inuse in
> assign_pages()
>
> From: Julien Grall <jgrall@xxxxxxxxxx>
>
> At the moment, assign_pages() expects the page to be inuse
> (PGC_state_inuse) and hence the state value to be 0.
>
> However, the code may race with the page offlining code (see
> offline_page()). Depending on the ordering, the page may be in offlining
> state (PGC_state_offlining) before it is assigned to a domain.
>
> On a debug build, this may trigger the assertion before the state is
> clobbered. On a non-debug build, the state will simply get clobbered.
>
> Incidentally, the PGC_broken flag will get clobbered as well.
>
> Grab the heap_lock to prevent a race with offline_page() and keep the
> state and broken flag around.
>
> Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
This seems like a reasonable change. I guess having assign_pages() take the
global lock is no more of a problem than its existing call to
domain_adjust_tot_pages(), which also takes the same lock.
Paul
>
> ---
> Changes in v2:
> - Supersedes <20200204133357.32101-1-julien@xxxxxxx>
> - Fix the race with offline_page()
> ---
> xen/common/page_alloc.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 97902d42c1..a684dbf37c 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2283,15 +2283,27 @@ int assign_pages(
> get_knownalive_domain(d);
> }
>
> + spin_lock(&heap_lock);
> for ( i = 0; i < (1 << order); i++ )
> {
> + /*
> + * We should only be here if the page is inuse or offlining.
> + * The latter happens if we race with mark_page_offline() as we
> + * don't hold the heap_lock.
> + */
> + ASSERT(page_state_is(&pg[i], inuse) ||
> + page_state_is(&pg[i], offlining));
> + ASSERT(!(pg[i].count_info & ~(PGC_state | PGC_broken)));
> ASSERT(page_get_owner(&pg[i]) == NULL);
> - ASSERT(!pg[i].count_info);
> page_set_owner(&pg[i], d);
> smp_wmb(); /* Domain pointer must be visible before updating
> refcnt. */
> - pg[i].count_info = PGC_allocated | 1;
> +
> + pg[i].count_info &= PGC_state | PGC_broken;
> + pg[i].count_info |= PGC_allocated | 1;
> +
> page_list_add_tail(&pg[i], &d->page_list);
> }
> + spin_unlock(&heap_lock);
>
> out:
> spin_unlock(&d->page_alloc_lock);
> --
> 2.17.1
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel