Re: [Xen-devel] [PATCH v7]xen: sched: convert RTDS from time to event driven model
On Thu, Mar 10, 2016 at 5:38 AM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> On Wed, 2016-03-09 at 23:00 -0500, Meng Xu wrote:
>> On Wed, Mar 9, 2016 at 10:46 AM, Dario Faggioli
>> <dario.faggioli@xxxxxxxxxx> wrote:
>> >
>> > Basically, by doing all the replenishments (which includes
>> > updating all the deadlines) upfront, we should be able to prevent
>> > the above situation.
>> >
>> > So, again, thoughts?
>>
>> I think we need to decide which vcpu is on the depleted queue before
>> updating the deadline, and we also need to record which vcpus should
>> be updated.
>>
> I think you are right about the need of a depleted flag, or something
> that will have the same effect (but I really do like using a flag for
> that! :-D ).
>
> I don't think we really need to count anything. In fact, what I had
> in mind and tried to put down in pseudocode is that we traverse the
> list of replenishment events twice. During the first traversal, we do
> not remove the elements that we replenish (i.e., the ones that we
> call rt_update_deadline() on). Therefore, we can just do the second
> traversal, find them all in there, handle the tickling, and --in this
> case-- remove and re-insert them. Wouldn't this work?

My concern is that, once we run rt_update_deadline() in the first
traversal of the list, we have already updated cur_deadline and
cur_budget. Since the replenishment queue is sorted by cur_deadline,
how can we know, in the second traversal, which vcpus were updated in
the first one and need to be re-inserted? We don't have to traverse
the whole replq to re-insert all vcpus, since some of them haven't
been replenished yet.

If we directly remove and re-insert, we have to change
replq_reinsert() to return whether the position of the to-be-inserted
vcpu changes, which probably complicates the logic of
replq_reinsert(), IMO.

If we want to avoid the counting, we can add a flag like

#define __RTDS_delayed_reinsert_replq 4
#define RTDS_delayed_reinsert_replq (1 << __RTDS_delayed_reinsert_replq)

so that we know when we should stop at the second traversal.

>> So I added some code into your code:
>>
>> #define __RTDS_is_depleted 3
>> #define RTDS_is_depleted (1 << __RTDS_is_depleted)
>>
> As said, I like this. However...
>
>> int num_repl_vcpus = 0;
>> for_each_to_be_replenished_vcpu(v)
>> {
>>     if ( v->cur_budget <= 0 )
>>         set_bit(__RTDS_is_depleted, &v->flags);
>>
> ... I think we can do this in burn_budget(), where we have this check
> in place already.

Agree. :-)

>>     rt_update_deadline(v);
>>     num_repl_vcpus++;
>> }
>>
>> for_each_to_be_replenished_vcpu(v)
>> {
>>     deadline_queue_remove(replq, v);
>>
>>     if ( curr_on_cpu(v->processor) == v ) // running
>>     {
>>         if ( v->cur_deadline >= runq[0]->cur_deadline )
>>             tickle(runq[0]); /* runq[0] => first vcpu in the runq */
>>     }
>>     else if ( __vcpu_on_q(v) )
>>     {
>>         if ( v->flags & RTDS_is_depleted ) // depleted
>>         {
>>             clear_bit(__RTDS_is_depleted, &v->flags);
>>
> if ( test_and_clear(xxx) )
>
> Or __test_and_clear(xxx).
>

Probably this one: test_and_clear_bit(), which is used to test and
clear the __RTDS_delayed_runq_add bit.

>>             tickle(v);
>>         }
>>         else // runnable
>>             ; /* do nothing */
>>     }
>>     deadline_queue_insert(v, replq);
>>
>>     /* stop at the vcpu that does not need replenishment */
>>     num_repl_vcpus--;
>>     if ( !num_repl_vcpus )
>>         break;
>>
> If we really need to record/mark this, I want to think at how that
> would be best done, as I'm not a fan of this counting... But
> hopefully, we just don't need to do anything like that, do we?

I think we need to know when to stop at the second traversal, unless
I'm wrong in the above. :-) I would prefer using the flag, because it
looks much clearer than counting.
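To make it concrete, here is a small standalone sketch (plain C,
compilable on its own) of how the two traversals could use such a
flag. All the names here -- struct vcpu, replq, replq_insert(),
rt_update_deadline(), repl_handler(), and the flag bit values -- are
simplified stand-ins for illustration only, not the actual sched_rt.c
code, which would use Xen's lists, locks and timers instead:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical flag bits, mirroring the ones discussed above. */
#define __RTDS_depleted          3
#define RTDS_depleted            (1u << __RTDS_depleted)
#define __RTDS_delayed_reinsert  4
#define RTDS_delayed_reinsert    (1u << __RTDS_delayed_reinsert)

struct vcpu {
    const char *name;
    int64_t cur_deadline, cur_budget;
    int64_t period, budget;
    unsigned flags;
    struct vcpu *next;          /* link in the replenishment queue */
};

/* Replenishment queue, sorted by cur_deadline (head = earliest). */
static struct vcpu *replq;

static void replq_insert(struct vcpu *v)
{
    struct vcpu **pp = &replq;

    while ( *pp && (*pp)->cur_deadline <= v->cur_deadline )
        pp = &(*pp)->next;
    v->next = *pp;
    *pp = v;
}

static void rt_update_deadline(struct vcpu *v, int64_t now)
{
    while ( v->cur_deadline <= now )
        v->cur_deadline += v->period;
    v->cur_budget = v->budget;
}

static void repl_handler(int64_t now)
{
    struct vcpu *v, *next, *prefix, *rest, **pp;

    /*
     * 1st traversal: replenish in place and mark every vcpu we
     * touch.  Nothing is reordered yet, so the marked vcpus stay a
     * contiguous prefix of the queue.
     */
    for ( v = replq; v && v->cur_deadline <= now; v = v->next )
    {
        rt_update_deadline(v, now);
        v->flags |= RTDS_delayed_reinsert;
    }

    /* Detach the whole marked prefix in one go... */
    pp = &replq;
    while ( *pp && ((*pp)->flags & RTDS_delayed_reinsert) )
        pp = &(*pp)->next;
    rest = *pp;
    *pp = NULL;       /* NULL-terminate the detached prefix */
    prefix = replq;   /* (if the prefix is empty, this is NULL) */
    replq = rest;     /* queue resumes at first unreplenished vcpu */

    /*
     * 2nd traversal: the flag tells us exactly which vcpus were
     * replenished; clear it and re-insert each one at its new
     * position.  The tickling decisions discussed above (running
     * vs. on the runq with the depleted bit set, cleared with
     * test_and_clear_bit() in the real code) would go right here;
     * they are omitted to keep the sketch small.
     */
    for ( v = prefix; v; v = next )
    {
        next = v->next;
        v->flags &= ~RTDS_delayed_reinsert;
        replq_insert(v);
    }
}

int main(void)
{
    struct vcpu a = { "a", 10, 0, 100, 30, 0, NULL };
    struct vcpu b = { "b", 20, 0,  50, 10, 0, NULL };
    struct vcpu c = { "c", 90, 5, 200, 40, 0, NULL };

    replq_insert(&a);
    replq_insert(&b);
    replq_insert(&c);

    repl_handler(25);   /* replenishes a and b, leaves c alone */

    for ( struct vcpu *v = replq; v; v = v->next )
        printf("%s: deadline=%lld budget=%lld\n", v->name,
               (long long)v->cur_deadline, (long long)v->cur_budget);
    return 0;
}

Note that detaching the prefix before re-inserting matters: the
re-inserted vcpus get new (larger) deadlines and could otherwise land
back in the middle of the not-yet-processed part of the queue, which
is exactly the "how do we know where to stop" problem above. Moving
the replenished vcpus onto a temporary list already during the first
traversal would achieve the same thing without the flag.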
Thanks and Best Regards,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel