Re: [Xen-devel] [PATCH v13 09/11] pvqspinlock, x86: Add para-virtualization support
On 11/03/2014 05:35 AM, Peter Zijlstra wrote:
> On Wed, Oct 29, 2014 at 04:19:09PM -0400, Waiman Long wrote:
>> arch/x86/include/asm/pvqspinlock.h | 411 +++++++++++++++++++++++++++++++++
>
> I do wonder why all this needs to live in x86..

I haven't looked into the para-virtualization code in other architectures to see if my PV code is equally applicable there. That is why I put it under the x86 directory. If other architectures decide to use qspinlock with paravirtualization, we may need to pull out some common code, if any, back to kernel/locking.
It is the same reason that you made them PV ops in your patch. Even when PV is on, the code won't need to call any of the PV ops most of the time. So it is just a matter of optimizing the most common case at the expense of performance in the rare case that the CPU needs to be halted and woken up, which will be bad performance-wise anyway. However, if you think they should be regular function pointers, I am fine with that too.

>> +/*
>> + * Queue Spinlock Para-Virtualization (PV) Support
>> + *
>> + * The PV support code for queue spinlock is roughly the same as that
>> + * of the ticket spinlock.
>
> Relative comments are bad, esp. since we'll make the ticket code go away if this works, at which point this is a reference into a black hole.

Thanks for the suggestion, I will remove that comment when I revise the patch.
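Going back to the PV ops point above, here is a rough user-space C model of it (not the actual patch; pv_wait_node(), model_spin_lock() and the 1000-spin threshold are made-up placeholders). The PV hook is only reached on the rare contended path that would otherwise halt the CPU, so the common uncontended case never pays for the indirection, whether the hook ends up as a PV op or a plain function pointer:

#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical PV hook; in a real guest this would issue a halt hypercall. */
static void pv_wait_node(int cpu)
{
        printf("cpu %d: would halt here until kicked\n", cpu);
}

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void model_spin_lock(int cpu)
{
        int spins = 0;

        /* Common case: the lock is taken without ever touching a PV hook. */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire)) {
                if (++spins > 1000) {           /* rare contended case */
                        pv_wait_node(cpu);      /* only here do we pay for PV */
                        spins = 0;
                }
        }
}

static void model_spin_unlock(void)
{
        atomic_flag_clear_explicit(&lock, memory_order_release);
}

int main(void)
{
        model_spin_lock(0);
        model_spin_unlock();
        printf("uncontended lock/unlock never touched the PV hook\n");
        return 0;
}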
Will do that.
I did that because in all the architectures except s390, the lock functions are not inlined. They live in the _raw_spin_lock* functions defined in kernel/locking/spinlock.c. The unlock functions, on the other hand, are all inlined except when PV spinlock is enabled. So adding a check for the jump label won't change any of the status quo.

>> +extern void queue_spin_unlock_slowpath(struct qspinlock *lock);
>> +
>>  /**
>>   * queue_spin_unlock - release a queue spinlock
>>   * @lock : Pointer to queue spinlock structure
>>   *
>>   * An effective smp_store_release() on the least-significant byte.
>> + *
>> + * Inlining of the unlock function is disabled when CONFIG_PARAVIRT_SPINLOCKS
>> + * is defined. So _raw_spin_unlock() will be the only call site that will
>> + * have to be patched.
>
> again if you hard rely on the not inlining make a build fail not a comment.

Will do that.
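One way to turn that reliance into a build failure rather than a comment, assuming CONFIG_UNINLINE_SPIN_UNLOCK is the option that keeps _raw_spin_unlock() out of line (the placement of the check is only illustrative, not part of the posted patch):

/*
 * Sketch only: fail the build if the configuration would inline
 * _raw_spin_unlock() while PV qspinlock expects it to stay out of line,
 * so that it remains the single call site to deal with.
 */
#if defined(CONFIG_PARAVIRT_SPINLOCKS) && !defined(CONFIG_UNINLINE_SPIN_UNLOCK)
#error "PV qspinlock relies on _raw_spin_unlock() not being inlined"
#endif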
As said in my previous emails, the PV ops call site patching code doesn't work well with locking. First of all, the native_patch() function was called even in a KVM PV guest. Some unlock calls happened before the paravirt_spinlocks_enabled jump label was set up. It occurs to me that call site patching is done the first time the call site is used. At least for those early unlock calls, there is no way to figure out if they should use the native fast path or the PV slow path. The only possible workaround that I can think of is to use a variable (if available) that signals the end of the bootup init phase; we can then defer the call site patching until the init phase has passed. This is a rather complicated solution which may not be worth the slight benefit of a faster unlock call in the native case.
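For contrast, here is a minimal sketch of the jump-label style check being relied on instead (pv_queue_spin_unlock() and native_queue_spin_unlock() are placeholder names; only paravirt_spinlocks_enabled and queue_spin_unlock_slowpath() come from the discussion above). Since static_key_false() evaluates to false until the key is explicitly enabled, unlock calls issued before the PV setup is complete simply take the native path, which sidesteps the early-boot patching problem described above:

#include <linux/jump_label.h>

struct qspinlock;       /* full definition lives in the qspinlock headers */

extern struct static_key paravirt_spinlocks_enabled;
extern void queue_spin_unlock_slowpath(struct qspinlock *lock);
/* Placeholder for the plain byte-store release done in the native case. */
extern void native_queue_spin_unlock(struct qspinlock *lock);

static inline void pv_queue_spin_unlock(struct qspinlock *lock)
{
        /*
         * static_key_false() reads as false until the key is enabled,
         * so every unlock issued before the PV guest finishes its setup
         * (including early boot) takes the native store.  The slowpath
         * is assumed to both release the lock and kick any halted waiter.
         */
        if (static_key_false(&paravirt_spinlocks_enabled))
                queue_spin_unlock_slowpath(lock);
        else
                native_queue_spin_unlock(lock);
}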
In my patch, the two pv_*() calls above serve as replacements for the waiting code. Making them return void and reduce to NOPs will cause what Konrad pointed out: doing the same operation twice, which is not ideal from a performance point of view for the PV version. Does putting the pre-PV code in a comment help to clarify what the code looked like before the PV changes?

-Longman
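To illustrate that "same operation twice" point, a rough sketch under assumed names (wait_for_node_locked() is invented, and struct mcs_spinlock is trimmed to the two fields that matter here): when the pv_*() hook itself performs the spin-then-halt wait, the caller waits exactly once, whereas a void hook that reduces to a NOP would leave the caller running its own spin loop on top of it.

#include <linux/compiler.h>       /* ACCESS_ONCE() */
#include <asm/processor.h>        /* cpu_relax() */

/* Trimmed-down MCS node: only the fields needed for the illustration. */
struct mcs_spinlock {
        struct mcs_spinlock *next;
        int locked;               /* set to 1 when the lock is handed over */
};

/* PV hook that replaces the wait: spins for a while, then halts until kicked. */
extern void pv_wait_node(struct mcs_spinlock *node);

static inline void wait_for_node_locked(struct mcs_spinlock *node)
{
#ifdef CONFIG_PARAVIRT_SPINLOCKS
        /* The hook performs the whole wait itself; nothing is repeated here. */
        pv_wait_node(node);
#else
        /* Native case: plain busy-wait on the MCS handoff flag. */
        while (!ACCESS_ONCE(node->locked))
                cpu_relax();
#endif
}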