
RE: [Xen-devel] cpuidle causing Dom0 soft lockups



Kevin has already explained the details regarding your comments, so I will just 
pick a few points for further explanation.

>
>I would not think that dealing with the xtime_lock scalability issue in
>timer_interrupt() should be *that* difficult. In particular it should be
>possible to assign an on-duty CPU (permanent or on a round-robin
>basis) that deals with updating jiffies/wallclock, and all other CPUs
>just update their local clocks. I had thought about this before, but
>never found a strong need to experiment with that.
>
>Jan

This is good. Eliminating a global lock is always good practice for scalability, 
especially as there will be more and more CPUs in the future. I would expect 
this to be the best solution to the softlockup issue.

And if the global xtime_lock can be eliminated, the cpuidle patch may not be 
needed anymore.

>>Could you please try the attached patch. This patch tries to avoid entering
>>deep C state when there is a vCPU with local irqs disabled and polling the
>>event channel. When tested on my 64-CPU box, this issue is gone with this patch.
>
>We could try it, but I'm not convinced of the approach. Why is the
>urgent determination dependent upon event delivery being disabled
>on the respective vCPU? If at all, it should imo be polling *or* event
>delivery disabled, not *and*.

The rationale of this patch is: disabling vCPU local irqs usually means the 
vCPU has an urgent task to finish ASAP and does not want to be interrupted.

As a first-step patch, I am a bit conservative and combine the two conditions 
with *and*. Once it is verified to work, I can extend this hint to *or*, as long 
as the *or* does not include unwanted cases that hurt power saving significantly.

>
>Also, iterating over all vCPU-s in that function doesn't seem very
>scalable. It would seem more reasonable for the scheduler to track
>how many "urgent" vCPU-s a pCPU currently has.

Yes, we can do this optimization. The current patch is just for quick 
verification purposes.

Regards
Ke

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

