
Re: [Xen-devel] [PATCH v2] Modified RTDS scheduler to use an event-driven model instead of polling.



> I think you might assume that the first M VCPUs in the runq are the
> current running VCPUs on the M pCPUs. Am I correct? (From what you
> described in the following example, I think I'm correct. ;-) )

Yes, that is an assumption you inferred from my implementation, because
I wrote code that relies on this "fact." As it stands, that code is
wrong, since the currently running vcpus are not kept in the runq.

> I tell that you make the above assumption from here.
>
> However, in the current implementation, runq does not hold the current
> running VCPUs on the pCPUs. We remove the vcpu from runq in
> rt_schedule() function. What you described above makes perfect sense
> "if" we decide to make runq hold the current running VCPUs.
>
> Actually, after thinking about the example you described, I think we
> can hold the current running VCPUs *and* the current idle pCPUs in the
> scheduler-wide structure; In other words, we can have another runningq
> (not runq) and an idle_pcpu list in the rt_private; Now all VCPUs are
> stored in three queues: runningq, runq, and depletedq, in increasing
> priority order.
>
> When we make the tickle decision, we only need to scan the idle_pcpu
> and then runningq to figure out which pCPU to tickle. All of other
> design you describe still hold here, except that the position where a
> VCPU is inserted into runq cannot directly give us which pCPU to
> tickle. What do you think?

This is a good idea too. I think simply keeping the running vcpus at the
beginning of the runq could also work.
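For concreteness, here is a minimal toy sketch of the tickle decision
under the proposed scheduler-wide layout. The names runningq and
idle_pcpu come from the proposal above; everything else (the simplified
list type, the field names, pick_preemptee()) is hypothetical and not
the actual sched_rt.c code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for Xen's struct rt_vcpu and its list linkage. */
struct rt_vcpu {
    unsigned long cur_deadline;   /* absolute EDF deadline */
    struct rt_vcpu *next;
};

/* Hypothetical scheduler-wide state, per the proposal: every vCPU
 * lives in exactly one of three queues. */
struct rt_private {
    struct rt_vcpu *runningq;   /* vCPUs currently on a pCPU */
    struct rt_vcpu *runq;       /* runnable, budget remaining */
    struct rt_vcpu *depletedq;  /* runnable, budget depleted */
    unsigned int nr_idle_pcpus; /* stand-in for the idle_pcpu list */
};

/* Tickle decision under this layout: an idle pCPU always wins;
 * otherwise scan runningq for the runner with the latest deadline
 * that is still later than the new vCPU's deadline. */
static struct rt_vcpu *pick_preemptee(struct rt_private *prv,
                                      struct rt_vcpu *newv)
{
    struct rt_vcpu *victim = NULL;

    if (prv->nr_idle_pcpus > 0)
        return NULL; /* caller should tickle an idle pCPU instead */

    for (struct rt_vcpu *v = prv->runningq; v; v = v->next)
        if (v->cur_deadline > newv->cur_deadline &&
            (victim == NULL || v->cur_deadline > victim->cur_deadline))
            victim = v;

    return victim;
}
```

As the mail notes, with a separate runningq the insertion position in
the runq no longer tells us directly which pCPU to tickle; this scan of
idle pCPUs and runningq replaces that shortcut.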

>> In case (b) there may be idle pCPUs (and, if that's the case, we
>> should tickle one of them, of course) or not. If not, we need to go
>> figure out which pCPU to tickle, which is exactly what runq_tickle()
>> does, but we at least know for sure that we want to tickle the pCPU
>> where vCPU k runs, or others where vCPUs with deadline greater than vCPU
>> k run.
>>
>> Does this make sense?
>
> Yes, if we decide to hold the currently running VCPUs in
> scheduler-wide structure: it can be runq or runningq.

Right. I think for now I'll just keep them in runq to keep most of the
selection logic the same.

> Thank you again! It is very clear and I'm clear which part is unclear now. :-D
>
>>
>> Dagaen, Meng, any question?
>>
>> I really think that, if we manage to implement all this, code quality
>> and performance would improve a lot. Oh, considering all the various and
>> (logically) different changes that I've hinted at, the patch needs to
>> become a patch series! :-D
>
> Sure! Dagaen, what do you think?

Yes, this may become a series with several changes like this. For now I
am going to get it working with the running vcpus kept in the runq.
I thought returning the inserted index was a good way of checking
whether we need to tickle, and moving the running vcpus into the runq
should bring the scheduler close to done as far as functional
correctness goes. The various other features hinted at would be a
series on top of this.
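To illustrate the insertion-index idea (a toy sketch only; the array
queue, runq_insert(), and needs_tickle() below are hypothetical
stand-ins, not the real sched_rt.c code): if the runq is kept sorted by
deadline with the M running vcpus at the front, then a new vCPU landing
at an index below M means it should be running now, so some pCPU must
be tickled.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_Q 16

/* Toy EDF runq: deadlines sorted ascending; per the plan above, the
 * first nr_pcpus entries model the currently running vCPUs. */
struct toy_runq {
    unsigned long deadline[MAX_Q];
    size_t len;
};

/* Insert a deadline in sorted order and return the index it landed at. */
static size_t runq_insert(struct toy_runq *q, unsigned long d)
{
    size_t i = q->len;

    while (i > 0 && q->deadline[i - 1] > d) {
        q->deadline[i] = q->deadline[i - 1]; /* shift later deadlines */
        i--;
    }
    q->deadline[i] = d;
    q->len++;
    return i;
}

/* The tickle check from the mail: tickle iff the new vCPU landed
 * among the first nr_pcpus (i.e. "should be running") entries. */
static int needs_tickle(size_t idx, size_t nr_pcpus)
{
    return idx < nr_pcpus;
}
```

The appeal of this scheme is that the insert, which has to happen
anyway, yields the tickle decision for free, rather than requiring a
separate scan of the queue.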

Regards,
~Dagaen Golomb

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
