
Re: [Xen-devel] [PATCH RFC V4 5/5] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock



* Marcelo Tosatti <mtosatti@xxxxxxxxxx> [2012-01-17 13:53:03]:

> On Tue, Jan 17, 2012 at 05:32:33PM +0200, Gleb Natapov wrote:
> > On Tue, Jan 17, 2012 at 07:58:18PM +0530, Srivatsa Vaddagiri wrote:
> > > * Gleb Natapov <gleb@xxxxxxxxxx> [2012-01-17 15:20:51]:
> > > 
> > > > > Having the hypercall makes the intent of the vcpu (to sleep on a
> > > > > kick) clear to the hypervisor, vs assuming that intent from a
> > > > > trapped hlt instruction (which won't work anyway when
> > > > > yield_on_hlt=0).
> > > > > 
> > > > The purpose of yield_on_hlt=0 is to allow a vcpu to occupy the cpu for
> > > > the entire time slice no matter what. I do not think disabling yield
> > > > on hlt even makes sense in a cpu oversubscription scenario.
> > > 
> > > Yes, so is there any real use for yield_on_hlt=0? I believe Anthony
> > > initially added it as a way to implement CPU bandwidth capping for VMs,
> > > which would ensure that busy VMs don't eat into cycles meant for an
> > > idle VM. Now that we have proper support in the scheduler for CPU
> > > bandwidth capping, is there any real-world use for yield_on_hlt=0?
> > > If not, deprecate it?
> > > 
> > I was against adding it in the first place, so if IBM no longer needs it
> > I am for removing it ASAP.
> 
> +1. 
> 
> Anthony?

CCing Anthony.

Anthony, could you ACK removal of yield_on_hlt (keeping it around would
unnecessarily complicate the pv-spinlock patches)?
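
For reference, a rough sketch of the two "wait" flavours being contrasted
above, plus the kick side. This is illustrative only and not the code from
this series: kvm_hypercall1()/kvm_hypercall2() and safe_halt() are existing
kernel helpers (header choices below are approximate), KVM_HC_KICK_CPU is the
kick hypercall named here for concreteness, and KVM_HC_WAIT_FOR_KICK is just a
placeholder for a "sleep until kicked" hypercall number.

#include <asm/kvm_para.h>	/* kvm_hypercall1(), kvm_hypercall2() */
#include <asm/irqflags.h>	/* safe_halt() on x86 */

#define KVM_HC_WAIT_FOR_KICK	100	/* placeholder, not a real ABI number */

/* Variant A: intent is explicit -- "put me to sleep until I am kicked". */
static void pv_wait_hypercall(void)
{
	kvm_hypercall1(KVM_HC_WAIT_FOR_KICK, 0);
}

/*
 * Variant B: just halt.  The hypervisor only learns the vcpu is idle if
 * the HLT actually exits to it, which is exactly what yield_on_hlt=0
 * prevents.
 */
static void pv_wait_hlt(void)
{
	safe_halt();
}

/* The kicking side: wake the target vcpu, identified by its APIC id. */
static void pv_kick_cpu(int apicid)
{
	kvm_hypercall2(KVM_HC_KICK_CPU, 0, apicid);
}

Either way the kick is a hypercall; the question is only whether the sleep
itself is a hypercall (intent visible to the host) or a halted vcpu that the
host may or may not get to see.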

- vatsa


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

