
Re: [Xen-devel] [PATCH 3 of 5] Rework locking in the PoD layer


  • To: "Tim Deegan" <tim@xxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 2 Feb 2012 06:04:06 -0800
  • Cc: george.dunlap@xxxxxxxxxxxxx, andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, keir.xen@xxxxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Thu, 02 Feb 2012 14:04:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> At 14:56 -0500 on 01 Feb (1328108167), Andres Lagar-Cavilla wrote:
>>  xen/arch/x86/mm/mm-locks.h |   10 ++++
>>  xen/arch/x86/mm/p2m-pod.c  |  112 ++++++++++++++++++++++++++------------------
>>  xen/arch/x86/mm/p2m-pt.c   |    1 +
>>  xen/arch/x86/mm/p2m.c      |    8 ++-
>>  xen/include/asm-x86/p2m.h  |   27 +++-------
>>  5 files changed, 93 insertions(+), 65 deletions(-)
>>
>>
>> The PoD layer has a complex locking discipline. It relies on the
>> p2m being globally locked, and it also relies on the page alloc
>> lock to protect some of its data structures. Replace all of this
>> with an explicit pod lock: one per p2m, with ordering enforced.
>>
>> Three consequences:
>>     - Critical sections in the pod code protected by the page alloc
>>       lock are now reduced to modifications of the domain page list.
>>     - When the p2m lock becomes fine-grained, there are no
>>       assumptions broken in the PoD layer.
>>     - The locking is easier to understand.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> This needs an Ack from George, too.  Also:
>
>> @@ -922,6 +929,12 @@ p2m_pod_emergency_sweep(struct p2m_domai
>>      limit = (start > POD_SWEEP_LIMIT) ? (start - POD_SWEEP_LIMIT) : 0;
>>
>>      /* FIXME: Figure out how to avoid superpages */
>> +    /* NOTE: Promote to globally locking the p2m. This will get complicated
>> +     * in a fine-grained scenario. Even if we're to lock each gfn
>> +     * individually we must be careful about recursion limits and
>> +     * POD_SWEEP_STRIDE. This is why we don't enforce deadlock constraints
>> +     * between p2m and pod locks */
>> +    p2m_lock(p2m);
>
> That's a scary comment.  It looks to me as if the mm-locks.h mechanism
> _does_ enforce those constraints - am I missing something?

The problem is that the recurse count of a spinlock is not particularly
wide, so a loop that does a lot of nested get_gfn* calls can overflow it.
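
To make the failure mode concrete, here is a standalone sketch (names
and field widths invented for illustration; this is not the actual Xen
spinlock code):

    #include <assert.h>
    #include <stdint.h>

    /* Recursive lock with a deliberately narrow recursion counter. */
    struct rec_lock {
        int     owner;        /* CPU id of the holder, -1 when free */
        uint8_t recurse_cnt;  /* wraps after 255 nested acquisitions */
    };

    static void rec_lock_acquire(struct rec_lock *l, int cpu)
    {
        if ( l->owner != cpu )
        {
            /* ... take the underlying spinlock here ... */
            l->owner = cpu;
        }
        /* A sweep that takes many nested get_gfn*-style references keeps
         * incrementing this; once it reaches the field's maximum the
         * next acquisition would wrap, so all we can do is assert. */
        assert(l->recurse_cnt < UINT8_MAX);
        l->recurse_cnt++;
    }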

The funny bit is that we do enforce ordering, so that part of the comment
is stale. Will update.
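
For reference, the kind of ordering check I mean looks roughly like
this (a simplified sketch of the general technique, not the real
mm-locks.h code; it ignores recursive re-acquisition of the same lock):

    #include <assert.h>

    /* Each lock class gets a fixed level; a CPU may only acquire locks
     * in strictly increasing level order, which rules out cycles. */
    static __thread int cur_lock_level; /* highest level currently held */

    static void ordered_lock_acquire(int level /* , the lock itself ... */)
    {
        /* Taking a lower- or equal-level lock while holding a higher one
         * would invert the global order and permit deadlock. */
        assert(level > cur_lock_level);
        cur_lock_level = level;
        /* ... acquire the underlying lock ... */
    }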

Andres
>
> Cheers,
>
> Tim.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

