
Re: [Xen-devel] Linux spin lock enhancement on xen


  • To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • From: George Dunlap <dunlapg@xxxxxxxxx>
  • Date: Tue, 24 Aug 2010 09:43:34 +0100
  • Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 24 Aug 2010 01:44:31 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Tue, Aug 24, 2010 at 9:20 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> I think there's a difference between providing some kind of yield_to as a
> private interface within the hypervisor as some kind of heuristic for
> emulating something like PAUSE, versus providing such an operation as a
> public guest interface.

I agree that any "yield_to" should be strictly a hint, and not a
guarantee by the HV.  If that's the case, I don't actually see a
difference between a malicious guest knowing that "yield_to" happens to
behave a certain way, and a malicious guest knowing that "PAUSE"
behaves a certain way.

> It seems to me that Jeremy's spinlock implementation provides all the info a
> scheduler would require: vcpus trying to acquire a lock are blocked, the
> lock holder wakes just the next vcpu in turn when it releases the lock. The
> scheduler at that point may have a decision to make as to whether to run the
> lock releaser, or the new lock holder, or both, but how can the guest help
with that when it's a system-wide scheduling decision? Obviously the guest
> would presumably like all its runnable vcpus to run all of the time!

I think that makes sense, but leaves out one important factor: that
the credit scheduler, as it is, is essentially round-robin within a
priority; and round-robin schedulers are known to discriminate against
vcpus that yield in favor of those that burn up their whole timeslice.
I think it makes sense to give yielding guests a bit of an advantage
to compensate for that.

That said, this whole thing needs measurement: any yield_to
implementation would need to show that:
* The performance is significantly better than either Jeremy's
patches, or simple yield (with, perhaps, boost-peers, as Xiantao
suggests)
* It does not give a spin-locking workload a cpu advantage over other
workloads, such as specjbb (cpu-bound) or scp (very
latency-sensitive).

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
