
Re: [Xen-devel] [RFC PATCH v2 1/2] xen: credit2: rb-tree for runqueues



On Wed, 2019-01-23 at 23:00 +0530, Praveen Kumar wrote:
> Hi Dario,
>
Hi,

> Thanks for your comments.
> 
So, two things. Can you please not top post? :-)

Also, trim the quotes. That means, when quoting while replying --which
is the good thing to do-- remove the parts that are not necessary for
understanding what you are replying to.

This avoids people having to scroll down for pages and pages before
finding what you wrote. Even worse, lack of trimming, especially when
combined with top posting, as in this email, may make people think
that there's no other content in your mail besides the top-posted
part! :-O

For instance, the first time I opened this message of yours, I thought
exactly that and did not scroll down. It was only on second thought
that I came back and double checked. :-)

To make all of the above clear, what I mean is: do it like this:

> On Fri, Jan 18, 2019 at 8:38 PM Dario Faggioli <dfaggioli@xxxxxxxx>
> wrote:
> > On Sun, 2018-12-23 at 19:51 +0530, Praveen Kumar wrote:
> > > --- a/xen/common/sched_credit2.c
> > > +++ b/xen/common/sched_credit2.c
> > > 
> > > @@ -3762,8 +3784,8 @@ csched2_dump(const struct scheduler *ops)
> > >              dump_pcpu(ops, j);
> > > 
> > >          printk("RUNQ:\n");
> > > -        list_for_each( iter, runq )
> > > -        {
> > > +
> > > +        for (iter = rb_last(runq); iter != NULL; iter = rb_prev(iter)) {
> > >              struct csched2_vcpu *svc = runq_elem(iter);
> > > 
> > >              if ( svc )
> > > 
> > Ok, this makes sense. Have you verified that the runqueue is
> > printed in
> > credits order in the dump?
> > 
> 
> Yes, I have dumped this using 'xl debug-keys r'
> 
> dmesg output :
> ...
> (XEN) sched_smt_power_savings: disabled
> (XEN) NOW=549728084734
> (XEN) Online Cpus: 0-3
> (XEN) Cpupool 0:
> (XEN) Cpus: 0-3
> (XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
> (XEN) Active queues: 1
> (XEN) default-weight     = 256
> (XEN) Runqueue 0:
> (XEN) ncpus              = 4
> (XEN) cpus               = 0-3
> (XEN) max_weight         = 256
> (XEN) pick_bias          = 1
> (XEN) instload           = 1
> (XEN) aveload            = 10002 (~3%)
> (XEN) idlers: 7
> (XEN) tickled: 0
> (XEN) fully idle cores: 7
> (XEN) Domain info:
> (XEN) Domain: 0 w 256 c 0 v 4
> (XEN)   1: [0.0] flags=2 cpu=3 credit=9803083 [w=256] load=517 (~0%)
> (XEN)   2: [0.1] flags=0 cpu=1 credit=10404026 [w=256] load=239 (~0%)
> (XEN)   3: [0.2] flags=0 cpu=0 credit=10369899 [w=256] load=2193 (~0%)
> (XEN)   4: [0.3] flags=0 cpu=2 credit=10500000 [w=256] load=1354 (~0%)
> (XEN) Runqueue 0:
> (XEN) CPU[00] runq=0, sibling=1, core=f
> (XEN) CPU[01] runq=0, sibling=2, core=f
> (XEN) CPU[02] runq=0, sibling=4, core=f
> (XEN) CPU[03] runq=0, sibling=8, core=f
> (XEN) run: [0.0] flags=2 cpu=3 credit=9803083 [w=256] load=517 (~0%)
> (XEN) RUNQ:
> 
Well, ok, but in this case there's no vcpu sitting in the runqueue. So,
we can't really see whether the runqueue is actually kept in the proper
order.
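
For reference, what I'd call "the proper order" is the tree being
keyed on credits. Just to illustrate (a minimal, hypothetical sketch:
the actual field and helper names in your patch may well differ), an
insertion that keeps the runqueue rb-tree sorted would look something
like this:

    /*
     * Hypothetical sketch: keep the runqueue rb-tree keyed on
     * credits. With the tree kept like this, an rb_last()/rb_prev()
     * walk, as in the dump hunk above, visits the vcpus in
     * decreasing credit order. 'credit' and 'runq_elem' (both the
     * field and the helper) are assumptions, not necessarily what
     * the patch actually uses.
     */
    static void runq_insert_sorted(struct rb_root *runq,
                                   struct csched2_vcpu *svc)
    {
        struct rb_node **link = &runq->rb_node, *parent = NULL;

        while ( *link )
        {
            struct csched2_vcpu *entry = runq_elem(*link);

            parent = *link;
            link = svc->credit < entry->credit ? &parent->rb_left
                                               : &parent->rb_right;
        }

        rb_link_node(&svc->runq_elem, parent, link);
        rb_insert_color(&svc->runq_elem, runq);
    }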

Not that a test like this is all that valuable and, now that I think
more about it, it's probably worth putting some ASSERT()-s in the code
to properly verify the ordering... but still, it's something.
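
Something along these lines, right in the dump loop, would do (again,
just a sketch, using the names from the hunk quoted above; svc->credit
being an int is an assumption too):

    /*
     * Sketch of a sanity check for the dump loop: walking the tree
     * from rb_last() backwards, credits must never increase, or the
     * runqueue is not sorted.
     */
    int prev_credit = INT_MAX;

    for ( iter = rb_last(runq); iter != NULL; iter = rb_prev(iter) )
    {
        struct csched2_vcpu *svc = runq_elem(iter);

        ASSERT(svc->credit <= prev_credit);
        prev_credit = svc->credit;
        /* ... existing printing of svc ... */
    }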

In order to actually have and see vcpus within the scheduler runqueues,
you can start a lot of domains --at least enough to have more vcpus
than pcpus-- and run some synthetic workload (like `yes') inside them,
so their vcpus will be busy.

Playing with pinning and cpupools can also help.

In the end, what you want is to have something printed after that:

 (XEN) RUNQ:

marker, in the dump.

> Will send the updated patch soon. Thanks.
> 
Cool, thanks. :-)
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/

