
Re: [Xen-devel] [PATCH v2] xen/sched: rework and rename vcpu_force_reschedule()



On 16.09.2019 14:49, Juergen Gross wrote:
> On 16.09.19 11:20, Jan Beulich wrote:
>> On 14.09.2019 08:42, Juergen Gross wrote:
>>> vcpu_force_reschedule() is only used for modifying the periodic timer
>>> of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
>>> is kind of brutal.
>>>
>>> So instead of doing the reschedule dance just operate on the timer
>>> directly. By protecting periodic timer modifications against concurrent
>>> timer activation via a per-vcpu lock, it is no longer even necessary to
>>> bother the target vcpu at all when updating its timer.
>>>
>>> Rename the function to vcpu_set_periodic_timer() as this now reflects
>>> the functionality.
>>>
>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>
>> I continue to be unhappy about there being no word at all about ...
>>
>>> @@ -724,24 +725,6 @@ static void vcpu_migrate_finish(struct vcpu *v)
>>>       vcpu_wake(v);
>>>   }
>>>   
>>> -/*
>>> - * Force a VCPU through a deschedule/reschedule path.
>>> - * For example, using this when setting the periodic timer period means that
>>> - * most periodic-timer state need only be touched from within the scheduler
>>> - * which can thus be done without need for synchronisation.
>>> - */
>>> -void vcpu_force_reschedule(struct vcpu *v)
>>
>> ... the originally intended synchronization-free handling. Forcing
>> the vCPU through the scheduler may seem harsh (and quite some
>> overhead), yes, but I don't think the above was written (and
>> decided) without consideration. One effect of this can be seen by
>> you ...
>>
>>> +void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value)
>>> +{
>>> +    spin_lock(&v->periodic_timer_lock);
>>> +
>>> +    stop_timer(&v->periodic_timer);
>>
>> ... introducing a new stop_timer() here, i.e. one which doesn't replace
>> any existing one. The implication is that, unlike before, the
>> periodic timer may now not run (for a brief moment) despite being
>> supposed to run - after all it has so far been active whenever a
>> vCPU was running.
>>
>> Then again, looking at the involved code paths once more, I wonder
>> whether this has been working right at all: There's an early exit
>> from schedule() when prev == next, which bypasses
>> vcpu_periodic_timer_work(). And I can't seem to spot anything on
>> the vcpu_force_reschedule() path which would guarantee that this
>> shortcut isn't taken.
> 
> First, the current "synchronization-free" handling doesn't actually
> exist. The synchronization is just hidden in the calls of
> vcpu_migrate_*() and is done via the scheduler lock.

Sure, but the scheduler lock needs to be taken during scheduling
of the vCPU anyway. There was no "extra" synchronization involved.
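
For reference, the path being removed looks roughly like the below
(quoted from memory, not verbatim from the tree, so the exact shape
may differ a little); the lock taken here is the very scheduler lock
I'm referring to:

    void vcpu_force_reschedule(struct vcpu *v)
    {
        /* Serialized against the scheduler via the per-vCPU scheduler lock. */
        spinlock_t *lock = vcpu_schedule_lock_irq(v);

        if ( v->is_running )
            vcpu_migrate_start(v);      /* sets _VPF_migrating */

        vcpu_schedule_unlock_irq(lock, v);

        /* Completes the migration and wakes the vCPU again. */
        vcpu_migrate_finish(v);
    }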

> Yes, I'm adding a stop_timer(), but the related stop_timer() call in
> the old code was in schedule(). So statically you are right, but
> dynamically there is no new stop_timer() call involved.

I did specifically check that my comment is not just about the
"static" part (as you call it). As said, there previously was no
stop_timer() behind a running vCPU's back. This is what worries me.
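
To make the window concrete: the tail of the new function isn't
quoted above, but assuming it re-arms the timer roughly along these
(purely illustrative) lines, a running vCPU now has no periodic
timer armed between the stop_timer() and the re-arm:

    void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value)
    {
        spin_lock(&v->periodic_timer_lock);

        stop_timer(&v->periodic_timer);
        /* <- window: v may still be running, yet no timer is armed */

        v->periodic_period = value;
        if ( value != 0 )
            /* Hypothetical re-arm; the actual patch may do this differently. */
            set_timer(&v->periodic_timer,
                      v->periodic_last_event + value);

        spin_unlock(&v->periodic_timer_lock);
    }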

> And lastly: the prev == next case would not occur today, as the
> migrate flag being set in vcpu->pause_flags would cause the vcpu to
> be taken away from the cpu. So it is working today, but setting the
> periodic timer requires two scheduling events in case the target
> vcpu is currently running.

I'm not going to claim I fully understood the code when looking at
it this morning, but I couldn't find the place(s) guaranteeing
that, by the time the migration of the vCPU is over, it wouldn't be
runnable again right away, and hence potentially re-chosen as the
vCPU to run on the pCPU it was running on before.
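
Just to be explicit about which shortcut I mean (paraphrased from
schedule(), not a verbatim quote) - it returns before either of the
periodic timer calls further down in the function is reached:

    /* In schedule(), roughly: */
    if ( unlikely(prev == next) )
    {
        pcpu_schedule_unlock_irq(lock, cpu);
        /*
         * Neither stop_timer(&prev->periodic_timer) nor
         * vcpu_periodic_timer_work(next) further down gets executed.
         */
        return continue_running(prev);
    }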

Jan
