
Re: [Xen-devel] [PATCH v2] Modified RTDS scheduler to use an event-driven model instead of polling.



>>> > There is an added timer that handles only replenishments; it is
>>> > armed for the time at which the next replenishment will occur. To
>>> > support this, we now also keep the depletedq sorted. If the handler
>>> > detects that it has moved a vCPU into the first [# CPUs] slots of
>>> > the runq, it tickles the runq with the added vCPU. If the depletedq
>>> > becomes empty, the timer is stopped; if the scheduler moves a vCPU
>>> > onto a previously empty depletedq, it restarts the timer.
>>> >
>>> > This may have some issues with the corner cases discussed earlier,
>>> > such as unexpected behavior if the two timers are armed for the same
>>> > time. It should be correct for the common case.
>>>
>>> Could you elaborate on when the two timers can be armed for the same
>>> time?
>
> I don't think the scenario you described below will happen. Here is my 
> argument:

It does take some thought to decide whether this can occur. I am also not
sure whether Xen lets timer handlers preempt each other and, if so,
whether that is even a problem.
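
(For concreteness, here is a minimal, self-contained C model of the
replenishment path described in the patch summary above. All names and
data structures are mine, not the patch's, and the real handler also
replenishes vCPUs on the runq, which this sketch omits.)

/* Toy model: queues are arrays kept sorted by cur_deadline, which for a
 * depleted vCPU is also its next replenishment time. */
#include <stdio.h>
#include <string.h>

#define NR_CPUS   2
#define MAX_VCPUS 8

struct vcpu {
    int    id;
    double budget, period;    /* static RTDS parameters            */
    double cur_budget;        /* remaining budget                  */
    double cur_deadline;      /* also the next replenishment time  */
};

static struct vcpu runq[MAX_VCPUS], depletedq[MAX_VCPUS];
static int nr_runq, nr_depleted;
static double timer_expiry = -1;   /* -1 == replenishment timer stopped */

/* Insert v into q, keeping q sorted by cur_deadline (EDF order). */
static void enqueue(struct vcpu *q, int *n, struct vcpu v)
{
    int i = (*n)++;
    while (i > 0 && q[i-1].cur_deadline > v.cur_deadline) {
        q[i] = q[i-1];
        i--;
    }
    q[i] = v;
}

/* Replenishment timer handler: move every vCPU whose replenishment time
 * has arrived from the depletedq to the runq, tickle if it landed in one
 * of the first NR_CPUS slots, then re-arm (or stop) the timer. */
static void replenish(double now)
{
    while (nr_depleted > 0 && depletedq[0].cur_deadline <= now) {
        struct vcpu v = depletedq[0];
        memmove(&depletedq[0], &depletedq[1], --nr_depleted * sizeof(v));
        v.cur_budget    = v.budget;    /* full replenishment        */
        v.cur_deadline += v.period;    /* next deadline/replenish   */
        enqueue(runq, &nr_runq, v);
        for (int i = 0; i < nr_runq; i++)       /* tickle check */
            if (runq[i].id == v.id && i < NR_CPUS)
                printf("tickle: vCPU %d entered slot %d\n", v.id, i);
    }
    timer_expiry = nr_depleted ? depletedq[0].cur_deadline : -1;
}

int main(void)
{
    struct vcpu v1 = { .id = 1, .budget = 2, .period = 10, .cur_deadline = 10 };
    struct vcpu v2 = { .id = 2, .budget = 4, .period = 20, .cur_deadline = 20 };
    enqueue(depletedq, &nr_depleted, v1);
    enqueue(depletedq, &nr_depleted, v2);
    replenish(10.0);                  /* timer fires at t = 10 */
    printf("timer re-armed for t=%g\n", timer_expiry);
    return 0;
}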

>>
>> Since the two timers are independent now, if a task on the depletedq has
>> a deadline at time X (so the replenishment timer will run) and another
>> task on a CPU runs out of budget at time X (so the scheduler should run),
>> it's not clear what will happen. If the replenishment goes first it
>> probably isn't a big deal.
>
> OK.
>
>> However, if rt_schedule goes first it may kick a vcpu that is about to
>> get a replenishment that would cause it to remain at top priority.
>
> So a VCPU i in the runq kicks the currently running VCPU j that is
> about to get a replenishment. Right?
> That means the cur_deadline of VCPU i is less than the cur_deadline of
> VCPU j; otherwise, VCPU i wouldn't preempt VCPU j.
>
> When VCPU j gets its replenishment, its deadline is pushed back by at
> least one period. Since the cur_deadline of VCPU i is already smaller
> than the cur_deadline of VCPU j, VCPU i will still have higher priority
> than VCPU j even after VCPU j's budget is replenished in the near
> future.
>
> Therefore, this scenario won't happen. :-)
> Do you have another scenario in mind? :-P

In this scenario you are correct.
I was thinking more about budget depletion: the vCPU may be kicked
because it has run out of budget. Imagine a vCPU completing its work just
before its deadline, and imagine it has a very high priority. It may
exhaust its budget at time X while some other vCPU has a replenishment
scheduled for the same time X. Now, if rt_schedule goes first it will kick
the vCPU to the depletedq; the vCPU will then be replenished and
immediately rescheduled, since it is high priority and should run again.
We have swapped a vCPU out and back in when it should simply have stayed
where it was. It does appear this would still be functionally correct,
though. Replenishment going first also shouldn't be a problem, since it
performs replenishment on both queues, and then the current vCPU wouldn't
be swapped out.
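
(To make that interleaving concrete under the toy model earlier in this
mail, here is a sketch of the "rt_schedule first" ordering.
demo_schedule_first and the timing values are hypothetical, and the
scheduler's accounting step is reduced to a single enqueue.)

/* Continues the toy model.  Both events land at t = 10: vCPU 1 exhausts
 * its budget exactly when its replenishment is due. */
static void demo_schedule_first(void)
{
    nr_runq = nr_depleted = 0;                    /* fresh queues */
    struct vcpu v1 = { .id = 1, .budget = 2, .period = 10,
                       .cur_budget = 0, .cur_deadline = 10 };

    /* rt_schedule's accounting runs first: the depleted vCPU is kicked
     * onto the depletedq and the timer is (re)started for its head. */
    enqueue(depletedq, &nr_depleted, v1);
    timer_expiry = depletedq[0].cur_deadline;

    /* The replenishment handler then fires for the same instant, moves
     * the vCPU straight back, and tickles a pCPU: one needless swap out
     * and back in, though nothing functionally incorrect. */
    replenish(10.0);       /* prints "tickle: vCPU 1 entered slot 0" */
}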

> The scenario in my mind that will potentially invoke one extra rt_schedule is:
> VCPU j currently runs out of budget and will have top priority once it
> gets its budget replenishment.
> If the replenishment runs first, rt_schedule will be invoked only once.
> If rt_schedule runs first and schedules a VCPU to run, rt_schedule will
> be invoked again when the replenishment is invoked.

This is a good point. The ordering in this case doesn't seem to cause any
functional/behavioral problems, but it will cause rt_schedule to run twice
when it could have run once. So, even as a corner case, it is a
performance corner case rather than a behavioral one.
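
(A rough way to see the extra invocation, again under the toy model;
rt_schedule_stub is a purely hypothetical stand-in that only counts
scheduler passes.)

/* Continues the toy model.  Count scheduler passes for each ordering. */
static int schedules;

static void rt_schedule_stub(void)
{
    schedules++;               /* stand-in for a real rt_schedule pass */
}

static void demo_double_invocation(void)
{
    struct vcpu vj = { .id = 3, .budget = 1, .period = 5,
                       .cur_budget = 0, .cur_deadline = 10 };

    /* Replenishment first: vCPU j is back on the runq before the
     * scheduler looks, so a single pass picks it up. */
    nr_runq = nr_depleted = 0;
    enqueue(depletedq, &nr_depleted, vj);
    schedules = 0;
    replenish(10.0);
    rt_schedule_stub();
    printf("replenishment first: %d pass(es)\n", schedules);  /* 1 */

    /* rt_schedule first: it runs without vCPU j, then the tickle from
     * replenish() raises SCHEDULE_SOFTIRQ and forces a second pass. */
    nr_runq = nr_depleted = 0;
    enqueue(depletedq, &nr_depleted, vj);
    schedules = 0;
    rt_schedule_stub();
    replenish(10.0);           /* tickle -> extra scheduler pass */
    rt_schedule_stub();
    printf("rt_schedule first:   %d pass(es)\n", schedules);   /* 2 */
}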

~Dagaen Golomb
