
Re: [Xen-devel] credit based scheduler


I agree with your analysis. As I think more about this, affinity-based
scheduling in the guest would run into trouble even without changing the
mapping between vcpu and pcpu: just running other domains on the pcpu
may invalidate the cache state. We may need to look at how best to
preserve the cache state for domains that need it. Some form of
exclusive VCPU pinning may be the answer here: the VCPU runs only on the
pcpu it is pinned to, and that pcpu in turn runs only the VCPU that
holds the exclusive binding. It might also be useful to notify the
guest when the vcpu-pcpu binding changes (via the shared page).


K. Y

>>> On Wed, Jun 21, 2006 at  2:13 PM, in message
<20060621181356.GA20321@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>, Emmanuel
<ack@xxxxxxxxxxxxx> wrote: 
> On Wed, Jun 21, 2006 at 11:41:14AM -0600, Ky Srinivasan wrote:
>> with sedf. Have you looked at the implications of load balancing in
>> the hypervisor on scheduling policies implemented in the guest OS? For
>> instance, if the guest is implementing CPU affinity and, as part of
>> load balancing, we decide to change the mapping between the vcpu and
>> physical cpu in the hypervisor, the scheduling decisions taken in the
>> guest would be bogus.
> True. There is a tradeoff between keeping a VCPU waiting to
> run on a particular physical CPU and running it elsewhere
> right away.
> I think the only case in which we would prefer not to move a waiting
> VCPU from the physical CPU it last ran on to an idle one is when:
>   - the VCPU very recently stopped running on said CPU, and
>   - it has warmed its cache considerably, and
>   - before the VCPU gets to run on that CPU again,
>       - very little will have run on the CPU, and
>       - the cache will not have been significantly blown.
> Basically, this says: It's bad to move a VCPU if it has a
> physical CPU pretty much to itself.
> But if a VCPU has a PCPU pretty much to itself, it's very
> unlikely it will end up sitting on that PCPU's runq long
> enough to be picked up by another PCPU.
> I think the simple thing to do here and a good rule of thumb
> in general is not to allow cycles to go idle when there is
> runnable work. If you can think of a counter example though,
> I'd love to consider it and perhaps make some changes.

Xen-devel mailing list


