
Re: [Xen-devel] [PATCH 12 of 18] x86/mm: Make page_lock/unlock() in arch/x86/mm.c externally callable



> At 02:47 -0500 on 08 Dec (1323312447), Andres Lagar-Cavilla wrote:
>> This is necessary for a new consumer of page_lock/unlock to follow in
>> the series.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> Nak, I'm afraid.
>
> These were OK as local functions but if they're going to be made
> generally visible, they need clear comments describing what this
> locking protects and what the discipline is for avoiding deadlocks.

How about something along the lines of
"page_lock() is used for two purposes: pte serialization, and memory
sharing. All users of page_lock for pte serialization live in mm.c; they
use it to lock a page table page during pte updates, take no other locks
within the critical section delimited by page_lock/unlock, and do no
nesting. All users of page_lock for memory sharing live in
mm/mem_sharing.c. For memory sharing, nesting may happen when sharing (and
locking) two pages; deadlock is avoided by always locking pages in
increasing order. Memory sharing may take the p2m_lock within a
page_lock/unlock critical section. These two users (pte serialization and
memory sharing) should never collide, as long as page table pages are
properly unshared prior to updating them."
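
Concretely, for the first user the pattern in mm.c looks roughly like the
below (a simplified sketch; the function name and the validation step are
placeholders for what mod_l1_entry and friends actually do):

    /* Simplified sketch of the pte-serialization pattern in mm.c.
     * The page lock is taken on the page table page itself, no other
     * locks are taken inside the critical section, and there is no
     * nesting of page locks on this path. */
    static int update_pte_entry(struct page_info *pt_page,
                                l1_pgentry_t *pl1e, l1_pgentry_t nl1e)
    {
        if ( unlikely(!page_lock(pt_page)) )
            return -EBUSY;

        /* ... validate nl1e and write it to *pl1e while the lock is held ... */

        page_unlock(pt_page);
        return 0;
    }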

Now those are all pretty words, but here are the two things I (think I)
need to do:
- Prior to updating ptes, we do get_gfn on the page table page. We should
be using get_gfn_unshare there, regardless of this patch. It's likely never
going to trigger an actual unshare, but better safe than sorry.
- I can wrap the uses of page_lock in mem sharing in an "external"
order-enforcing construct from mm-locks.h, and use that to scream about a
deadlock between page_lock and p2m_lock (rough sketch below).
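
Roughly (the mm_lock_order_* helpers and PAGE_LOCK_LEVEL below are names I
am making up to stand in for whatever mm-locks.h ends up providing):

    /* Sketch of an "external" ordering wrapper for page_lock in
     * mem_sharing.c.  Same level-checking trick mm-locks.h uses for its
     * own locks: ASSERT if something that must be taken after page_lock
     * (e.g. the p2m lock) is already held on this CPU. */
    static inline int page_lock_ordered(struct page_info *pg)
    {
        mm_lock_order_check(PAGE_LOCK_LEVEL);   /* scream on ordering violation */
        if ( !page_lock(pg) )
            return 0;
        mm_lock_order_set(PAGE_LOCK_LEVEL);
        return 1;
    }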

The code that actually uses page_lock() in the memory sharing code can be
found in "[PATCH] x86/mm: Eliminate global lock in mem sharing code". It
already orders the locking of individual pages in ascending order.
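
In spirit that ordering is just the following (a sketch only, not the code
from that patch; any fixed global order on pages works, pointer order is
used here for brevity):

    /* Sketch: lock two pages in a fixed (ascending) order so that two
     * CPUs sharing the same pair of pages cannot deadlock against each
     * other. */
    static int lock_page_pair(struct page_info *pg1, struct page_info *pg2)
    {
        if ( pg1 > pg2 )
        {
            struct page_info *tmp = pg1; pg1 = pg2; pg2 = tmp;
        }
        if ( !page_lock(pg1) )
            return 0;
        if ( pg2 != pg1 && !page_lock(pg2) )
        {
            page_unlock(pg1);
            return 0;
        }
        return 1;
    }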

Andres

>
> Perhaps Jan or Keir can supply appropriate words.  The locking was
> introduced in this cset:
>
>     changeset:   17846:09dd5999401b
>     user:        Keir Fraser <keir.fraser@xxxxxxxxxx>
>     date:        Thu Jun 12 18:14:00 2008 +0100
>     files:       xen/arch/x86/domain.c xen/arch/x86/domain_build.c
>     xen/arch/x86/mm.c
>     description:
>     x86: remove use of per-domain lock from page table entry handling
>
>     This change results in a 5% performance improvement for kernel builds
>     on dual-socket quad-core systems (which is what I used for reference
>     for both 32- and 64-bit). Along with that, the amount of time reported
>     as spent in the kernel gets reduced by almost 25% (the fraction of
>     time spent in the kernel is generally reported significantly higher
>     under Xen than with a native kernel).
>
>     Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
>     Signed-off-by: Keir Fraser <keir.fraser@xxxxxxxxxx>
>
> Tim.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

