Re: [Xen-devel] [Patch 2/2] xen/sched_credit2.c : Runqueue per core
Thank you. :-) I will work on the things you mentioned and resend the patch.
It's great to work on patches. I was trying to figure out how to change the
code so that it looks neat, and now I have the answer. Thank you. :-) I will
summarize the performance results in the cover letter.

Regards,
Uma Sharma

On Mon, Mar 9, 2015 at 6:33 PM, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> On Mon, 2015-03-09 at 12:18 +0000, George Dunlap wrote:
>> On Mon, Mar 9, 2015 at 8:55 AM, Uma Sharma <uma.sharma523@xxxxxxxxx> wrote:
>
>> > --- a/xen/common/sched_credit2.c
>> > +++ b/xen/common/sched_credit2.c
>
>> > @@ -1935,15 +1938,36 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>> >          return;
>> >      }
>> >
>> > +    /* Figure out which type of runqueue is to be created */
>> > +    if (!strcmp(opt_credit2_runqueue, "socket")) {
>> > +        rq = 's';
>> > +    } else if (!strcmp(opt_credit2_runqueue, "core")) {
>> > +        rq = 'c';
>> > +    } else {
>> > +        rq = 's';
>> > +    }
>>
>> It would be more typical, rather than having this be a char resolving
>> to 's' and 'c', to have it be an int, and have the values be #defines;
>> for example, "CREDIT2_OPT_RUNQUEUE_CORE" and
>> "CREDIT2_OPT_RUNQUEUE_SOCKET".
>>
> I was about to suggest the same.
>
>> Also, given that your experiments show 'core' to work quite a bit
>> better than 'socket', I'd suggest making it default to core rather
>> than socket. :-)
>>
> +1.
>
> Of course, as I said already, you should explain and provide the
> numbers about this performance improvement in the cover letter of the
> series and, IMO, reference that in the changelog of this patch too
> (not putting the full results, but a quick summary of them would be
> good).
>
> Regards,
> Dario

--
Regards,
Uma Sharma
http://about.me/umasharma
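For concreteness, here is a minimal sketch of what the #define-based option
George suggests might look like. Only the two constant names come from the
review above; the variable name, the parsing helper, and the fallback
behaviour are illustrative assumptions, not the actual follow-up patch:

    #include <string.h>

    /* Runqueue-type options as integer constants, per the review
     * (constant names taken from George's suggestion). */
    #define CREDIT2_OPT_RUNQUEUE_CORE    1
    #define CREDIT2_OPT_RUNQUEUE_SOCKET  2

    /* Default to per-core runqueues, since the posted numbers favour
     * them over per-socket. */
    static int opt_credit2_runqueue = CREDIT2_OPT_RUNQUEUE_CORE;

    /* Parse the runqueue option string ("core" or "socket"); any
     * unrecognised value keeps the default (core). Helper name and
     * structure are assumptions for illustration only. */
    static void parse_credit2_runqueue(const char *s)
    {
        if ( !strcmp(s, "socket") )
            opt_credit2_runqueue = CREDIT2_OPT_RUNQUEUE_SOCKET;
        else if ( !strcmp(s, "core") )
            opt_credit2_runqueue = CREDIT2_OPT_RUNQUEUE_CORE;
    }

With integer constants, init_pcpu() can then switch on
opt_credit2_runqueue instead of comparing chars, which keeps the option
handling in one place and makes adding further runqueue granularities a
one-line change.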