
Re: [Xen-devel] [PATCH 1 of 5] Make p2m lookups fully synchronized wrt modifications


  • To: "Tim Deegan" <tim@xxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 2 Feb 2012 06:01:26 -0800
  • Cc: george.dunlap@xxxxxxxxxxxxx, andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, keir.xen@xxxxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Thu, 02 Feb 2012 14:01:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> At 14:56 -0500 on 01 Feb (1328108165), Andres Lagar-Cavilla wrote:
>>  xen/arch/x86/mm/hap/hap.c  |   6 +++++
>>  xen/arch/x86/mm/mm-locks.h |  40 ++++++++++++++++++++++++--------------
>>  xen/arch/x86/mm/p2m.c      |  21 ++++++++++++++++++-
>>  xen/include/asm-x86/p2m.h  |  47 ++++++++++++++++++++++++-----------------
>>  4 files changed, 79 insertions(+), 35 deletions(-)
>>
>>
>> We achieve this by locking/unlocking the global p2m_lock in get/put_gfn.
>>
>> The lock is always taken recursively, as there are many paths that
>> do get_gfn and, later, make another attempt at grabbing the p2m_lock.
>>
>> The lock is not taken for shadow lookups, as the testing needed to rule
>> out the possibility of locking inversions and deadlock has not yet been
>> carried out for shadow paging. This is tolerable as long as shadows do
>> not gain support for paging or sharing.
>
> Are you aware of any inversions or are you just suggesting there might
> be some left?

I'm not aware of any inversions; a previously posted patch cleans up all
the ones I could see. But I haven't tested shadows, so I don't feel
comfortable enabling the lock there just yet.

>
>> HAP (EPT) lookups and all modifications do take the lock.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>>
>> diff -r 9a55109e4d7e -r 223512f9fb5b xen/arch/x86/mm/hap/hap.c
>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -786,7 +786,12 @@ hap_paging_get_mode(struct vcpu *v)
>>  static void hap_update_paging_modes(struct vcpu *v)
>>  {
>>      struct domain *d = v->domain;
>> +    unsigned long cr3_gfn = v->arch.hvm_vcpu.guest_cr[3];
>> +    p2m_type_t t;
>>
>> +    /* We hold onto the cr3 as it may be modified later, and
>> +     * we need to respect lock ordering */
>> +    (void)get_gfn(d, cr3_gfn, &t);
>
> Don't you need to check for errors?
Yes, good to have. As in, bail out if !p2m_is_ram(t) || !mfn_valid(mfn).

>
>>      paging_lock(d);
>>
>>      v->arch.paging.mode = hap_paging_get_mode(v);
>> @@ -803,6 +808,7 @@ static void hap_update_paging_modes(stru
>>      hap_update_cr3(v, 0);
>>
>>      paging_unlock(d);
>> +    put_gfn(d, cr3_gfn);
>>  }
>>
>>  #if CONFIG_PAGING_LEVELS == 3
>> diff -r 9a55109e4d7e -r 223512f9fb5b xen/arch/x86/mm/mm-locks.h
>> --- a/xen/arch/x86/mm/mm-locks.h
>> +++ b/xen/arch/x86/mm/mm-locks.h
>> @@ -144,6 +144,31 @@ static inline void mm_enforce_order_unlo
>>   *
>> *
>>   ************************************************************************/
>>
>> +declare_mm_lock(nestedp2m)
>> +#define nestedp2m_lock(d)   mm_lock(nestedp2m, &(d)->arch.nested_p2m_lock)
>> +#define nestedp2m_unlock(d) mm_unlock(&(d)->arch.nested_p2m_lock)
>> +
>> +/* P2M lock (per-p2m-table)
>> + *
>> + * This protects all queries and updates to the p2m table.
>> + *
>> + * A note about ordering:
>> + *   The order established here is enforced on all mutations of a p2m.
>> + *   For lookups, the order established here is enforced only for hap
>> + *   domains (1. shadow domains present a few nasty inversions;
>> + *            2. shadow domains do not support paging and sharing,
>> + *               the main sources of dynamic p2m mutations)
>> + *
>> + * The lock is recursive as it is common for a code path to look up a gfn
>> + * and later mutate it.
>> + */
>> +
>> +declare_mm_lock(p2m)
>> +#define p2m_lock(p)           mm_lock_recursive(p2m, &(p)->lock)
>> +#define p2m_lock_recursive(p) mm_lock_recursive(p2m, &(p)->lock)
>> +#define p2m_unlock(p)         mm_unlock(&(p)->lock)
>> +#define p2m_locked_by_me(p)   mm_locked_by_me(&(p)->lock)
>> +
>>  /* Sharing per page lock
>>   *
>>   * This is an external lock, not represented by an mm_lock_t. The memory
>
> Since you've just reversed the locking order between this page-sharing
> lock and the p2m lock, you need to update the comment that describes
> it.  Also, presumably, the code that it describes?
The comment may well be stale; I'll look into it.

The code is fine, per testing. The next patch does a proper cleanup: the
"complicated condition" is grabbing the p2m lock nested inside a
page-sharing critical section, but thanks to this patch the p2m lock will
already have been taken beforehand, so there is no race (I hope that makes
sense).

Bottom line: correct with this patch, clean with the next.

Thanks!
Andres
>
> Cheers,
>
> Tim.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
