
RE: [Xen-devel] cpuidle causing Dom0 soft lockups



>>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 15.02.10 18:33 >>>
>>Attached is a better version of your patch (I think). I haven't applied it
>>because I don't see why the ASSERT() in sched_credit.c is correct. How do
>>you know for sure that !v->is_urgent there (and therefore avoid
>>urgent_count manipulation)?
>
>Two remarks: For one, your patch no longer considers vCPU-s with event
>delivery disabled to be urgent.

Oh, sorry that I made this change without explaining the reason. When a vCPU is 
blocked with event delivery disabled, it is either a guest CPU that has been taken 
offline or a guest CPU polling on an event channel. An offlined guest CPU should 
not be treated as an urgent vCPU, so we only need to track the event-channel 
polling case. That is why I simplified the logic to treat only vCPUs polling on an 
event channel as urgent.
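
To illustrate the intent (this is only a sketch, not the exact hunk from the 
patch; the helper name and the use of pause_flags / poll_mask are assumptions 
based on the existing scheduler structures), the urgency update could look 
roughly like:

    static inline void vcpu_urgent_count_update(struct vcpu *v)
    {
        if ( v->is_running )
            return;

        if ( v->is_urgent )
        {
            /* No longer blocked polling an event channel: drop urgency. */
            if ( !(v->pause_flags & VPF_blocked) ||
                 !test_bit(v->vcpu_id, v->domain->poll_mask) )
            {
                v->is_urgent = 0;
                atomic_dec(&per_cpu(schedule_data, v->processor).urgent_count);
            }
        }
        else
        {
            /* Blocked and polling an event channel: the only urgent case. */
            if ( unlikely(v->pause_flags & VPF_blocked) &&
                 unlikely(test_bit(v->vcpu_id, v->domain->poll_mask)) )
            {
                v->is_urgent = 1;
                atomic_inc(&per_cpu(schedule_data, v->processor).urgent_count);
            }
        }
    }

An offlined vCPU is blocked but not on the poll mask, so it never bumps
urgent_count and cpuidle is free to enter deep C-states.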

>Second, here
>
>>+    /*
>>+     * Transfer urgency status to new CPU before switching CPUs, as once
>>+     * the switch occurs, v->is_urgent is no longer protected by the
>>+     * per-CPU scheduler lock we are holding.
>>+     */
>>+    if ( unlikely(v->is_urgent) )
>>+    {
>>+        atomic_dec(&per_cpu(schedule_data, old_cpu).urgent_count);
>>+        atomic_inc(&per_cpu(schedule_data, new_cpu).urgent_count);
>>+    }
>
>I would think we should either avoid the atomic ops altogether if
>old_cpu == new_cpu, or switch the updating order (inc before dec).

Do you mean that when old_cpu == new_cpu and urgent_count == 1, the current 
approach (dec before inc) leaves a small window (after the dec, before the inc) in 
which urgent_count == 0, and may therefore mislead the cpuidle driver? If that is 
the concern, I am fine with it and would prefer switching the updating order.
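
As a sketch of what switching the order would look like (assuming the 
surrounding code stays as in the quoted hunk above):

    /*
     * Inc before dec: even when old_cpu == new_cpu the counter goes
     * 1 -> 2 -> 1 instead of 1 -> 0 -> 1, so cpuidle never observes a
     * transient urgent_count of zero.
     */
    if ( unlikely(v->is_urgent) )
    {
        atomic_inc(&per_cpu(schedule_data, new_cpu).urgent_count);
        atomic_dec(&per_cpu(schedule_data, old_cpu).urgent_count);
    }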

Regards
Ke

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

