
Re: [Xen-devel] [PATCH v6 1/5] x86/mem_sharing: reorder when pages are unlocked and released


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Thu, 18 Jul 2019 13:33:20 +0000
  • Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 18 Jul 2019 13:33:57 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18.07.2019 15:13, Tamas K Lengyel wrote:
> On Thu, Jul 18, 2019 at 7:12 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>
>> On 18.07.2019 14:55, Tamas K Lengyel wrote:
>>> On Thu, Jul 18, 2019 at 4:47 AM Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 17.07.2019 21:33, Tamas K Lengyel wrote:
>>>>> @@ -900,6 +895,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>         p2m_type_t smfn_type, cmfn_type;
>>>>>         struct two_gfns tg;
>>>>>         struct rmap_iterator ri;
>>>>> +    unsigned long put_count = 0;
>>>>>
>>>>>         get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
>>>>>                      cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
>>>>> @@ -964,15 +960,6 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>             goto err_out;
>>>>>         }
>>>>>
>>>>> -    /* Acquire an extra reference, for the freeing below to be safe. */
>>>>> -    if ( !get_page(cpage, dom_cow) )
>>>>> -    {
>>>>> -        ret = -EOVERFLOW;
>>>>> -        mem_sharing_page_unlock(secondpg);
>>>>> -        mem_sharing_page_unlock(firstpg);
>>>>> -        goto err_out;
>>>>> -    }
>>>>> -
>>>>>         /* Merge the lists together */
>>>>>         rmap_seed_iterator(cpage, &ri);
>>>>>         while ( (gfn = rmap_iterate(cpage, &ri)) != NULL)
>>>>> @@ -984,13 +971,14 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>              * Don't change the type of rmap for the client page. */
>>>>>             rmap_del(gfn, cpage, 0);
>>>>>             rmap_add(gfn, spage);
>>>>> -        put_page_and_type(cpage);
>>>>> +        put_count++;
>>>>>             d = get_domain_by_id(gfn->domain);
>>>>>             BUG_ON(!d);
>>>>>             BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
>>>>>             put_domain(d);
>>>>>         }
>>>>>         ASSERT(list_empty(&cpage->sharing->gfns));
>>>>> +    BUG_ON(!put_count);
>>>>>
>>>>>         /* Clear the rest of the shared state */
>>>>>         page_sharing_dispose(cpage);
>>>>> @@ -1001,7 +989,9 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
>>>>>
>>>>>         /* Free the client page */
>>>>>         put_page_alloc_ref(cpage);
>>>>> -    put_page(cpage);
>>>>> +
>>>>> +    while ( put_count-- )
>>>>> +        put_page_and_type(cpage);
>>>>>
>>>>>         /* We managed to free a domain page. */
>>>>>         atomic_dec(&nr_shared_mfns);
>>>>> @@ -1165,19 +1155,13 @@ int __mem_sharing_unshare_page(struct domain *d,
>>>>>         {
>>>>>             if ( !last_gfn )
>>>>>                 mem_sharing_gfn_destroy(page, d, gfn_info);
>>>>> -        put_page_and_type(page);
>>>>> +
>>>>>             mem_sharing_page_unlock(page);
>>>>> +
>>>>>             if ( last_gfn )
>>>>> -        {
>>>>> -            if ( !get_page(page, dom_cow) )
>>>>> -            {
>>>>> -                put_gfn(d, gfn);
>>>>> -                domain_crash(d);
>>>>> -                return -EOVERFLOW;
>>>>> -            }
>>>>>                 put_page_alloc_ref(page);
>>>>> -            put_page(page);
>>>>> -        }
>>>>> +
>>>>> +        put_page_and_type(page);
>>>>>             put_gfn(d, gfn);
>>>>>
>>>>>             return 0;
>>>>
>>>> ... this (main, as I guess by the title) part of the change? I think
>>>> you want to explain what was wrong here and/or why the new arrangement
>>>> is better. (I'm sorry, I guess it was this way on prior versions
>>>> already, but apparently I didn't notice.)
>>>
>>> It's what the patch message says - calling put_page_and_type before
>>> mem_sharing_page_unlock can cause a deadlock. Since we are now
>>> holding a reference to the page until the end, there is no need for
>>> the extra get_page/put_page logic when we are dealing with the last_gfn.
>>
>> The title says "reorder" without any "why".
> 
> Yes, I can't reasonably fit "Calling _put_page_type while also holding
> the page_lock for that page can cause a deadlock." into the title. So
> it's spelled out in the patch message.

Of course not. And I realize _part_ of the changes is indeed what the
title says (although for share_pages() that's not obvious from the
patch alone). But you do more: You also avoid acquiring an extra
reference in share_pages().

And since you made me look at the code again: If put_page() is unsafe
with a lock held, how come the get_page_and_type() in share_pages()
is safe with two such locks held? If it really is, it surely would be
worthwhile to state in the description. There's another such instance
in mem_sharing_add_to_physmap() (plus a get_page()), and also one
where put_page_and_type() gets called with such a lock held (afaics).

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

