
[Xen-devel] RE: Power aware credit scheduler



>From: Emmanuel Ackaouy [mailto:ackaouy@xxxxxxxxx] 
>Sent: 19 June 2008 22:38
>
>On Jun 19, 2008, at 15:32, Tian, Kevin wrote:
>>> Regardless of any new knobs, a good default behavior might be
>>> to only take a package out of C-state when another non-idle
>>> package has had more than one VCPU active on it over some
>>> reasonable amount of time.
>>>
>>> By default, putting multiple VCPUs on the same physical package
>>> when other packages are idle is obviously not always going to
>>> be optimal. Maybe it's not a bad default for VCPUs that are
>>> related (same VM or qemu)? I think Ian P hinted at this. But it
>>> frightens me that you would always do this by default for any set
>>> of VCPUs. Power saving is good but so is memory bandwidth.
>>
>> Enabling this feature depends on a control command from the system
>> administrator, who understands the tradeoff. From an absolute
>> performance point of view, I believe it's not optimal. However,
>> looked at from the performance/watt, i.e. power efficiency, angle,
>> the power saving from package-level idle may outweigh the performance
>> impact of keeping activity within the other package. Of course,
>> memory latency should also be considered on NUMA systems, as you
>> mentioned.
>
>I'm saying something can be done to improve power saving in
>the current system without adding a knob. Perhaps you can give
>the admin even more power saving abilities with a knob, but it
>makes sense to save power when performance is not impacted,
>regardless of any knob position.

Then I agree. It's always good to improve one aspect while leaving the
other unaffected, or to first fix issues that hinder both. Then we'll
also compare whether adding a knob yields a clearly better result.
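
Just to confirm we mean the same default, below is a rough sketch of
how I read your suggestion: only wake an idle package when some
non-idle package has carried more than one active VCPU for a while.
Note that pkg_state, NR_PACKAGES, BUSY_THRESHOLD_MS and
should_wake_package() are made-up names for illustration, not existing
Xen interfaces; the credit scheduler would have to maintain this
per-package state itself.

    /*
     * Hypothetical sketch only -- made-up per-package state, not the
     * Xen credit scheduler API. Decide whether an idle package should
     * be taken out of its C-state to relieve a package that has been
     * running more than one VCPU for a while.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define NR_PACKAGES       4    /* assumed topology, illustration only */
    #define BUSY_THRESHOLD_MS 10   /* the "reasonable amount of time"     */

    /* Made-up per-package accounting; the scheduler would maintain it. */
    struct pkg_state {
        bool     idle;             /* package currently in a C-state      */
        unsigned active_vcpus;     /* VCPUs runnable/running on package   */
        uint64_t overloaded_ms;    /* time active_vcpus has stayed above 1 */
    };

    extern struct pkg_state pkg[NR_PACKAGES];

    /* Wake idle package 'target' only if some non-idle package has been
     * running more than one VCPU for longer than the threshold. */
    static bool should_wake_package(unsigned target)
    {
        unsigned p;

        if (!pkg[target].idle)
            return false;          /* already awake, nothing to decide   */

        for (p = 0; p < NR_PACKAGES; p++) {
            if (p == target || pkg[p].idle)
                continue;
            if (pkg[p].active_vcpus > 1 &&
                pkg[p].overloaded_ms >= BUSY_THRESHOLD_MS)
                return true;
        }
        return false;
    }

With something like that as the default, a knob would only need to
control how much further we consolidate once performance does start
to be affected.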

>
>Also, note I mentioned memory BANDWIDTH and not latency.
>It's not the same thing. And I wasn't just thinking about NUMA
>systems.
>

Thanks for pointing that out; I read too quickly and misread it. I'm
still not sure how memory bandwidth is affected by VCPU scheduling,
though. Do you mean more memory traffic on the bus due to shared-cache
contention when multiple VCPUs run in the same package? That may be
workload specific, and other workloads may not be affected to the same
extent. But it's a good hint, and we'll include such workloads in the
experiments when making the change. Taking the vcpu/domain relationship
into account is another thing we can try. The basic direction is to
start simple first and see the effect.

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

