
Re: [Xen-devel] [PATCH RFC V4 0/5] kvm : Paravirt-spinlock support for KVM guests



On 01/17/2012 06:00 AM, Jeremy Fitzhardinge wrote:
On 01/16/2012 09:24 PM, Alexander Graf wrote:
This is true in the case where you're spinning. If, under overcommit, spinlocks
would yield() instead of spinning, we wouldn't have any vcpu running that's
just waiting for a late ticket.

Yes, but the reality is that most spinlocks are held for a short period
of time and there's a low likelihood of being preempted while within a
spinlock critical section.  Therefore if someone else tries to get the
spinlock and there's contention, it's always worth spinning for a little
while because the lock will likely become free soon.

I too believe that lock-holder preemption forms only a small part of the problem. The non-PLE machine results for this patch series seem to support that.


At least that's the case if the lock has low contention (shallow queue
depth and not in slow state).  Again, maybe it makes sense to never spin
for deep queues or already slowstate locks.
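
To make that concrete, the kind of bounded spin under discussion looks roughly
like the sketch below. This is illustration only: SPIN_THRESHOLD, the ticket
layout and __ticket_lock_slowpath() are placeholders standing in for whatever
the real series uses, not the actual patch.

/*
 * Sketch of a ticket lock that spins for a bounded number of
 * iterations and then drops into a paravirt slowpath.  All names here
 * (SPIN_THRESHOLD, __ticket_lock_slowpath, the tickets layout) are
 * placeholders.
 */
#define SPIN_THRESHOLD	(1 << 11)

static __always_inline void ticket_spin_lock(arch_spinlock_t *lock)
{
	register struct __raw_tickets inc = { .tail = 1 };

	inc = xadd(&lock->tickets, inc);	/* grab the next ticket */

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (inc.head == inc.tail)
				return;		/* lock is ours */
			cpu_relax();
			inc.head = ACCESS_ONCE(lock->tickets.head);
		} while (--count);

		/*
		 * Spun "long enough" without getting the lock: ask the
		 * host to block this vcpu until the holder's unlock
		 * path kicks us (the slow/contended state).
		 */
		__ticket_lock_slowpath(lock, inc.tail);
	}
}

The threshold is the only knob there; whether it lives in the guest or is set
by the host is exactly what's being debated below.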

We still have an issue finding the point in time when a vcpu could run again, 
which is what this whole series is about. My point above was that instead of 
doing a count loop, we could just do the normal spin dance and set the 
threshold to when we enable the magic to have another spin lock notify us in 
the CPU. That way we

   * don't change the uncontended case

I don't follow you.  What do you mean by "the normal spin dance"?  What
do you mean by "have another spinlock notify us in the CPU"?  Don't
change which uncontended case?  Do you mean in the locking path?  Or the
unlock path?  Or both?

   * can set the threshold on the host, which knows how contended the system is

Hm, I'm not convinced that knowing how contended the system is is all
that useful overall.  What's important is how contended a particular
lock is, and what state the current holder is in.  If it's not currently
running, then knowing the overall system contention would give you some
idea about how long you need to wait for it to be rescheduled, but
that's getting pretty indirect.

I think the "slowpath if preempted while spinning" idea I mentioned in
the other mail is probably worth following up, since that gives specific,
actionable information to the guest from the hypervisor.  But lots of
caveats.

[[
A possible mechanism:

   * register ranges of [er]ips with the hypervisor
   * each range is paired with a "resched handler block"
   * if vcpu is preempted within such a range, make sure it is
     rescheduled in the resched handler block

This is obviously akin to the exception mechanism, but it is partially
implemented by the hypervisor.  It allows the spinlock code to be
unchanged from native, but to make use of a resched rather than an explicit
counter to determine when to slowpath the lock.  And it's a nice general
mechanism that could be potentially useful elsewhere.

Unfortunately, it doesn't change the unlock path at all; it still needs
to explicitly test if a VCPU needs to be kicked on unlock.
]]
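
Just to sketch the shape of such an interface (every name below is invented;
no such hypercall or structure exists in Xen, KVM or the kernel), the
guest-side registration might look like:

/*
 * Illustration only: a made-up interface for the "register ranges of
 * [er]ips with the hypervisor" idea above.
 */
struct resched_range {
	unsigned long	start_ip;	/* first ip of the spin loop */
	unsigned long	end_ip;		/* last ip of the spin loop */
	unsigned long	handler_ip;	/* resched handler block: where the
					 * vcpu resumes if it was preempted
					 * inside [start_ip, end_ip) */
};

/* hypothetical asm labels around the spin loop, and the handler */
extern char spin_loop_start[], spin_loop_end[];
extern void spinlock_resched_handler(void);

static struct resched_range spinlock_resched_range = {
	.start_ip   = (unsigned long)spin_loop_start,
	.end_ip     = (unsigned long)spin_loop_end,
	.handler_ip = (unsigned long)spinlock_resched_handler,
};

static void __init register_spinlock_resched_ranges(void)
{
	/* hypothetical hypercall registering one range */
	hypervisor_register_resched_ranges(&spinlock_resched_range, 1);
}

The handler would then take the lock's slowpath (record interest in the lock
and halt), which is the "resched rather than an explicit counter" part; as
noted above, the unlock side still has to test whether a vcpu needs kicking.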


And since we control what spin locks look like, we can for example always keep 
the pointer to it in a specific register so that we can handle 
pv_lock_ops.lock_spinning() inside there and fetch all the information we need 
from our pt_regs.

You've left a pile of parts of an idea lying around, but I'm not sure
what shape you intend it to be.

Interesting option. But is it feasible to reserve specific registers?
We can have nested locks (which means we would need a table), and the
"normal" spinlock acquisition path should also add as little overhead as possible.
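
For what it's worth, my reading of the register idea is something like the
sketch below; the fixed-register convention, the per-cpu table and the helper
names are all invented for illustration (only pv_lock_ops.lock_spinning() is
from the discussion), and the nested-lock case is what pushes it back towards
a table anyway:

/*
 * Illustration only: one way a "lock pointer in a known register"
 * convention could feed a lock_spinning() hook.  The per-cpu table and
 * helper names are made up here.
 */
struct lock_waiting {
	struct arch_spinlock	*lock;
	__ticket_t		want;
};
static DEFINE_PER_CPU(struct lock_waiting, lock_waiting);

static void fixed_reg_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
{
	/*
	 * If the guest kept the lock address in one agreed-upon
	 * register, the host could recover it from the vcpu's pt_regs
	 * when it intercepts us here, with no explicit arguments.
	 *
	 * The nested-lock problem: an inner spinlock taken from an
	 * interrupt clobbers that register, so in practice you still
	 * end up with a per-cpu (lock, ticket) table like this one.
	 */
	this_cpu_write(lock_waiting.lock, lock);
	this_cpu_write(lock_waiting.want, want);

	halt();		/* wait for the unlock path to kick this vcpu */
}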


     J




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

