
Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 18 Jul 2019 10:46:17 +0000
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Delivery-date: Thu, 18 Jul 2019 10:47:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released

On 17.07.2019 21:33, Tamas K Lengyel wrote:
> Calling _put_page_type while also holding the page_lock
> for that page can cause a deadlock.
> 
> The comment being dropped is incorrect since it's now out-of-date.
> 
> Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>

The description covers ...

> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -648,10 +648,6 @@ static int page_make_private(struct domain *d, struct page_info *page)
>           return -EBUSY;
>       }
>   
> -    /* We can only change the type if count is one */
> -    /* Because we are locking pages individually, we need to drop
> -     * the lock here, while the page is typed. We cannot risk the
> -     * race of page_unlock and then put_page_type. */
>       expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
>       if ( page->u.inuse.type_info != expected_type )
>       {
> @@ -660,12 +656,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
>           return -EEXIST;
>       }
>   
> +    mem_sharing_page_unlock(page);
> +
>       /* Drop the final typecount */
>       put_page_and_type(page);
>   
> -    /* Now that we've dropped the type, we can unlock */
> -    mem_sharing_page_unlock(page);
> -
>       /* Change the owner */
>       ASSERT(page_get_owner(page) == dom_cow);
>       page_set_owner(page, d);

all of the above. But what about ...

> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>       p2m_type_t smfn_type, cmfn_type;
>       struct two_gfns tg;
>       struct rmap_iterator ri;
> +    unsigned long put_count = 0;
>   
>       get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>                    cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>           goto err_out;
>       }
>   
> -    /* Acquire an extra reference, for the freeing below to be safe. */
> -    if ( !get_page(cpage, dom_cow) )
> -    {
> -        ret = -EOVERFLOW;
> -        mem_sharing_page_unlock(secondpg);
> -        mem_sharing_page_unlock(firstpg);
> -        goto err_out;
> -    }
> -
>       /* Merge the lists together */
>       rmap_seed_iterator(cpage, &ri);
>       while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>            * Don't change the type of rmap for the client page. */
>           rmap_del(gfn, cpage, 0);
>           rmap_add(gfn, spage);
> -        put_page_and_type(cpage);
> +        put_count++;
>           d = get_domain_by_id(gfn->domain);
>           BUG_ON(!d);
>           BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>           put_domain(d);
>       }
>       ASSERT(list_empty(&cpage->sharing->gfns));
> +    BUG_ON(!put_count);
>   
>       /* Clear the rest of the shared state */
>       page_sharing_dispose(cpage);
> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>   
>       /* Free the client page */
>       put_page_alloc_ref(cpage);
> -    put_page(cpage);
> +
> +    while ( put_count-- )
> +        put_page_and_type(cpage);
>   
>       /* We managed to free a domain page. */
>       atomic_dec(&nr_shared_mfns);
> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>       {
>           if ( !last_gfn )
>               mem_sharing_gfn_destroy(page, d, gfn_info);
> -        put_page_and_type(page);
> +
>           mem_sharing_page_unlock(page);
> +
>           if ( last_gfn )
> -        {
> -            if ( !get_page(page, dom_cow) )
> -            {
> -                put_gfn(d, gfn);
> -                domain_crash(d);
> -                return -EOVERFLOW;
> -            }
>               put_page_alloc_ref(page);
> -            put_page(page);
> -        }
> +
> +        put_page_and_type(page);
>           put_gfn(d, gfn);
>   
>           return 0;

... this (main, as I guess from the title) part of the change? I think
you want to explain what was wrong here and/or why the new arrangement
is better. (I'm sorry, I guess it was this way in prior versions
already, but apparently I didn't notice.)
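
To spell out the rule the diff establishes (a minimal illustrative
sketch, not code from the tree: mem_sharing_page_lock/unlock,
put_page_and_type, PGT_locked and type_info are the real names from
the hunks above, "pg" is just a placeholder, and everything else is
elided):

    /*
     * Never drop the final type reference while the page is still
     * locked: mem_sharing_page_lock() sets PGT_locked in
     * u.inuse.type_info, and _put_page_type() operates on type_info
     * as well; per the description, doing both at once can deadlock.
     */
    if ( !mem_sharing_page_lock(pg) )
        return -EBUSY;

    /* ... inspect / tear down the sharing state under the lock ... */

    mem_sharing_page_unlock(pg);    /* clear PGT_locked first ...       */
    put_page_and_type(pg);          /* ... only then drop the typecount */

The share_pages() hunks follow the same rule by merely counting the
per-gfn references while the pages are locked (put_count) and issuing
the matching put_page_and_type() calls only at the end; presumably
those deferred references are also what makes the extra
get_page()/put_page() pair around put_page_alloc_ref() unnecessary.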

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel