Re: [Xen-devel] [PATCH 3 of 5] Rework locking in the PoD layer
> At 14:56 -0500 on 01 Feb (1328108167), Andres Lagar-Cavilla wrote:
>>  xen/arch/x86/mm/mm-locks.h |   10 ++++
>>  xen/arch/x86/mm/p2m-pod.c  |  112 ++++++++++++++++++++++++++------------------
>>  xen/arch/x86/mm/p2m-pt.c   |    1 +
>>  xen/arch/x86/mm/p2m.c      |    8 ++-
>>  xen/include/asm-x86/p2m.h  |   27 +++-------
>>  5 files changed, 93 insertions(+), 65 deletions(-)
>>
>> The PoD layer has a complex locking discipline. It relies on the
>> p2m being globally locked, and it also relies on the page alloc
>> lock to protect some of its data structures. Replace this all by an
>> explicit pod lock: per p2m, order enforced.
>>
>> Three consequences:
>> - Critical sections in the pod code protected by the page alloc
>>   lock are now reduced to modifications of the domain page list.
>> - When the p2m lock becomes fine-grained, there are no
>>   assumptions broken in the PoD layer.
>> - The locking is easier to understand.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> This needs an Ack from George, too.  Also:
>
>> @@ -922,6 +929,12 @@ p2m_pod_emergency_sweep(struct p2m_domai
>>      limit = (start > POD_SWEEP_LIMIT) ? (start - POD_SWEEP_LIMIT) : 0;
>>
>>      /* FIXME: Figure out how to avoid superpages */
>> +    /* NOTE: Promote to globally locking the p2m. This will get complicated
>> +     * in a fine-grained scenario. Even if we're to lock each gfn
>> +     * individually we must be careful about recursion limits and
>> +     * POD_SWEEP_STRIDE. This is why we don't enforce deadlock constraints
>> +     * between p2m and pod locks */
>> +    p2m_lock(p2m);
>
> That's a scary comment.  It looks to me as if the mm-locks.h mechanism
> _does_ enforce those constraints - am I missing something?

The problem is that the recurse count of a spinlock is not particularly
wide. So if you have a loop that does a lot of nested get_gfn* calls, you
may overflow it (see the sketch at the end of this message).

The funny bit is that we do enforce ordering, so that part of the comment
is stale. Will update.

Andres

>
> Cheers,
>
> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
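For illustration, a minimal, self-contained sketch of the overflow concern
described above: a recursive lock whose nesting depth lives in a narrow
bitfield. The 4-bit width, the struct layout, and all names below are
assumptions made for the example only, not Xen's actual spinlock
implementation; the point is simply that nesting once per gfn across a
sweep stride, without releasing in between, quickly exceeds such a counter.

/*
 * Sketch only (not Xen code): a recursive lock with a narrow nesting
 * counter.  The 4-bit width and all names are hypothetical.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define RECURSE_CNT_BITS 4                         /* assumed width */
#define RECURSE_CNT_MAX  ((1u << RECURSE_CNT_BITS) - 1)

struct rec_lock {
    uint16_t owner;                                /* CPU currently holding it */
    unsigned int recurse_cnt : RECURSE_CNT_BITS;   /* nesting depth */
};

/* Acquire, possibly recursively; trip an assert before the counter wraps. */
static void rec_lock_acquire(struct rec_lock *l, uint16_t cpu)
{
    if (l->owner == cpu && l->recurse_cnt) {
        assert(l->recurse_cnt < RECURSE_CNT_MAX);  /* nesting too deep */
        l->recurse_cnt++;
        return;
    }
    /* Uncontended first acquisition; real spinning/atomics elided. */
    l->owner = cpu;
    l->recurse_cnt = 1;
}

int main(void)
{
    struct rec_lock lk = { .owner = 0, .recurse_cnt = 0 };

    /*
     * A sweep that locked each gfn individually, without releasing in
     * between, nests once per gfn.  With a stride of, say, 1024 gfns the
     * counter overflows almost immediately.
     */
    for (unsigned int gfn = 0; gfn < 1024; gfn++)
        rec_lock_acquire(&lk, 1);

    printf("never reached: nesting depth %u\n", lk.recurse_cnt);
    return 0;
}

Promoting to a single p2m_lock()/p2m_unlock() pair around the whole sweep,
as the quoted hunk does, sidesteps the per-gfn nesting entirely.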