
RE: [Xen-devel] cpuidle causing Dom0 soft lockups



>From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx] 
>Sent: 08 February 2010 17:08
>
>>>> "Yu, Ke" <ke.yu@xxxxxxxxx> 07.02.10 16:36 >>>
>>The attached is the updated patch, it has two changes
>>- change the logic from local irq disabled *and* poll event to
>>local irq disabled *or* poll event
>
>Thanks.
>
>>- Use a per-CPU vcpu list to iterate the vCPUs, which is more
>>scalable. The original scheduler does not provide such a list,
>>so this patch implements it in the scheduler code.
>
>I'm still not really happy with that solution. I'd rather say that e.g.
>vcpu_sleep_nosync() should set a flag in the vcpu structure indicating
>whether that one is "urgent", and the scheduler should just maintain
>a counter of "urgent" vCPU-s per pCPU. Setting the flag when a vCPU
>is put to sleep guarantees that it won't be mis-treated if it got woken
>by the time acpi_processor_idle() looks at it (or at least the window
>would be minimal - not sure if it can be eliminated completely). Plus
>not having to traverse a list is certainly better for scalability,
>not the least since you're traversing a list (necessarily) including
>sleeping vCPU-s (i.e. the ones that shouldn't affect the
>performance/responsiveness of the system).
>
>But in the end it would certainly depend much more on Keir's view on
>it than on mine...
>

Yes, that's a good point. Actually that was Ke's first choice when he
tried to implement it, but he gave up because it was hard to maintain
the counter at the appropriate entry/exit points. Introducing a new
'urgent' flag would make that easier. Another reason for the per-CPU
vcpu list was its potential reusability in other scenarios, but that
argument does not look strong now.
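For reference, here is a minimal standalone sketch of the approach
being discussed: an "urgent" flag on each vCPU plus a per-pCPU counter,
so the idle driver can do an O(1) check instead of walking a vCPU list.
The names (vcpu_sleep, urgent_count, deep_cstate_allowed, ...) are
hypothetical and not taken from the actual patch or from Xen:

/* Standalone model, not Xen code. */
#include <stdbool.h>
#include <stdio.h>

struct pcpu {
    int urgent_count;        /* number of "urgent" vCPUs resting here */
};

struct vcpu {
    bool is_urgent;          /* set when put to sleep while polling or
                              * with local IRQs disabled */
    struct pcpu *processor;  /* pCPU this vCPU last ran on */
};

/* Sleep path (cf. vcpu_sleep_nosync() in Xen): decide urgency once,
 * at sleep time, and account it on the pCPU. */
static void vcpu_sleep(struct vcpu *v, bool irqs_disabled, bool polling)
{
    if (irqs_disabled || polling) {
        v->is_urgent = true;
        v->processor->urgent_count++;
    }
}

/* Wake path: clear the flag and drop the counter. */
static void vcpu_wake(struct vcpu *v)
{
    if (v->is_urgent) {
        v->is_urgent = false;
        v->processor->urgent_count--;
    }
}

/* What the idle handler (cf. acpi_processor_idle()) would check:
 * a single counter read instead of traversing the vCPU list. */
static bool deep_cstate_allowed(const struct pcpu *p)
{
    return p->urgent_count == 0;
}

int main(void)
{
    struct pcpu p = { .urgent_count = 0 };
    struct vcpu v = { .is_urgent = false, .processor = &p };

    vcpu_sleep(&v, /*irqs_disabled=*/false, /*polling=*/true);
    printf("deep C-state allowed: %d\n", deep_cstate_allowed(&p)); /* 0 */

    vcpu_wake(&v);
    printf("deep C-state allowed: %d\n", deep_cstate_allowed(&p)); /* 1 */
    return 0;
}

In the real scheduler the counter updates would of course need the
appropriate locking or atomics; the sketch only illustrates the O(1)
check the idle handler would perform.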

Anyway, a new patch per your suggestion is in progress now.

Thanks,
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
