
Re: [Xen-devel] [PATCH] xen: sched: improve debug dump output.



On Thu, 2017-01-26 at 13:59 -0500, Meng Xu wrote:
> Hi Dario,
> 
Hi,

> On Thu, Jan 26, 2017 at 11:52 AM, Dario Faggioli
> <dario.faggioli@xxxxxxxxxx> wrote:
> >
> >  Runqueue 0:
> >  CPU[00] runq=0, sibling=00000000,00000003, core=00000000,000000ff
> >     run: [0.15] flags=2 cpu=0 credit=5804742 [w=256] load=3655
> > (~1%)
> >  CPU[01] runq=0, sibling=00000000,00000003, core=00000000,000000ff
> >  CPU[02] runq=0, sibling=00000000,0000000c, core=00000000,000000ff
> >     run: [0.3] flags=2 cpu=2 credit=6674856 [w=256] load=262144
> > (~100%)
> >  CPU[03] runq=0, sibling=00000000,0000000c, core=00000000,000000ff
> >  RUNQ:
> 
> What is the difference between RUNQ and Runqueue 0 in the message?
> 
Right. So, here is a more comprehensive output:

(XEN) [ 2797.156864] Cpupool 0:
(XEN) [ 2797.156866] Cpus: 0-5,10-15
(XEN) [ 2797.156868] Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) [ 2797.156871] Active queues: 2
(XEN) [ 2797.156873]    default-weight     = 256
(XEN) [ 2797.156876] Runqueue 0:
(XEN) [ 2797.156878]    ncpus              = 6
(XEN) [ 2797.156879]    cpus               = 0-5
(XEN) [ 2797.156881]    max_weight         = 256
(XEN) [ 2797.156882]    instload           = 5
(XEN) [ 2797.156884]    aveload            = 1052984 (~401%)
(XEN) [ 2797.156887]    idlers: 00000000,0000002a
(XEN) [ 2797.156889]    tickled: 00000000,00000000
(XEN) [ 2797.156891]    fully idle cores: 00000000,00000000
(XEN) [ 2797.156894] Runqueue 1:
(XEN) [ 2797.156896]    ncpus              = 6
(XEN) [ 2797.156897]    cpus               = 10-15
(XEN) [ 2797.156899]    max_weight         = 256
(XEN) [ 2797.156900]    instload           = 4
(XEN) [ 2797.156902]    aveload            = 1061305 (~404%)
(XEN) [ 2797.156904]    idlers: 00000000,0000e800
(XEN) [ 2797.156906]    tickled: 00000000,00000000
(XEN) [ 2797.156908]    fully idle cores: 00000000,0000c000
(XEN) [ 2797.156910] Domain info:
(XEN) [ 2797.156912]    Domain: 0 w 256 v 16
(XEN) [ 2797.156914]      1: [0.0] flags=2 cpu=4 credit=-476120314 [w=256] 
load=85800 (~32%)
(XEN) [ 2797.156919]      2: [0.1] flags=0 cpu=2 credit=5630728 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.156923]      3: [0.2] flags=0 cpu=2 credit=4719251 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.156928]      4: [0.3] flags=2 cpu=2 credit=5648202 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.156933]      5: [0.4] flags=2 cpu=12 credit=2735243 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.156939]      6: [0.5] flags=2 cpu=12 credit=2721770 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.156945]      7: [0.6] flags=0 cpu=12 credit=2150753 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.156950]      8: [0.7] flags=0 cpu=14 credit=10424341 [w=256] 
load=2836 (~1%)
(XEN) [ 2797.156986]      9: [0.8] flags=0 cpu=4 credit=10500000 [w=256] 
load=14 (~0%)
(XEN) [ 2797.156991]     10: [0.9] flags=0 cpu=14 credit=10500000 [w=256] 
load=12 (~0%)
(XEN) [ 2797.156995]     11: [0.10] flags=0 cpu=5 credit=9204778 [w=256] 
load=7692 (~2%)
(XEN) [ 2797.156999]     12: [0.11] flags=0 cpu=1 credit=10501097 [w=256] 
load=2791 (~1%)
(XEN) [ 2797.157004]     13: [0.12] flags=0 cpu=4 credit=10500000 [w=256] 
load=28 (~0%)
(XEN) [ 2797.157008]     14: [0.13] flags=0 cpu=11 credit=10500000 [w=256] 
load=19 (~0%)
(XEN) [ 2797.157013]     15: [0.14] flags=0 cpu=14 credit=10500000 [w=256] 
load=388 (~0%)
(XEN) [ 2797.157017]     16: [0.15] flags=0 cpu=3 credit=9832716 [w=256] 
load=7326 (~2%)
(XEN) [ 2797.157022]    Domain: 1 w 256 v 2
(XEN) [ 2797.157024]     17: [1.0] flags=2 cpu=10 credit=-1085114190 [w=256] 
load=261922 (~99%)
(XEN) [ 2797.157029]     18: [1.1] flags=0 cpu=14 credit=10500000 [w=256] 
load=0 (~0%)
(XEN) [ 2797.157033]    Domain: 2 w 256 v 2
(XEN) [ 2797.157035]     19: [2.0] flags=2 cpu=0 credit=-593239186 [w=256] 
load=47389 (~18%)
(XEN) [ 2797.157040]     20: [2.1] flags=0 cpu=11 credit=10500000 [w=256] 
load=0 (~0%)
(XEN) [ 2797.157044] Runqueue 0:
(XEN) [ 2797.157047] CPU[00] runq=0, sibling=00000000,00000003, 
core=00000000,000000ff
(XEN) [ 2797.157050]    run: [2.0] flags=2 cpu=0 credit=-593239186 [w=256] 
load=47389 (~18%)
(XEN) [ 2797.157055] CPU[01] runq=0, sibling=00000000,00000003, 
core=00000000,000000ff
(XEN) [ 2797.157058] CPU[02] runq=0, sibling=00000000,0000000c, 
core=00000000,000000ff
(XEN) [ 2797.157061]    run: [0.3] flags=2 cpu=2 credit=5648202 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.157066] CPU[03] runq=0, sibling=00000000,0000000c, 
core=00000000,000000ff
(XEN) [ 2797.157069] CPU[04] runq=0, sibling=00000000,00000030, 
core=00000000,000000ff
(XEN) [ 2797.157072]    run: [0.0] flags=2 cpu=4 credit=-476120314 [w=256] 
load=85800 (~32%)
(XEN) [ 2797.157077] CPU[05] runq=0, sibling=00000000,00000030, 
core=00000000,000000ff
(XEN) [ 2797.157080] RUNQ:
(XEN) [ 2797.157081]      0: [0.1] flags=0 cpu=2 credit=5630728 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.157086]      1: [0.2] flags=0 cpu=2 credit=4719251 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.157090] Runqueue 1:
(XEN) [ 2797.157093] CPU[10] runq=1, sibling=00000000,00000c00, 
core=00000000,0000ff00
(XEN) [ 2797.157096]    run: [1.0] flags=2 cpu=10 credit=-1085114190 [w=256] 
load=261922 (~99%)
(XEN) [ 2797.157101] CPU[11] runq=1, sibling=00000000,00000c00, 
core=00000000,0000ff00
(XEN) [ 2797.157104] CPU[12] runq=1, sibling=00000000,00003000, 
core=00000000,0000ff00
(XEN) [ 2797.157108]    run: [0.5] flags=2 cpu=12 credit=2715377 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.157113] CPU[13] runq=1, sibling=00000000,00003000, 
core=00000000,0000ff00
(XEN) [ 2797.157118] CPU[14] runq=1, sibling=00000000,0000c000, 
core=00000000,0000ff00
(XEN) [ 2797.157121] CPU[15] runq=1, sibling=00000000,0000c000, 
core=00000000,0000ff00
(XEN) [ 2797.157125] RUNQ:
(XEN) [ 2797.157126]      0: [0.6] flags=0 cpu=12 credit=2150753 [w=256] 
load=262144 (~100%)
(XEN) [ 2797.157131]      1: [0.4] flags=0 cpu=12 credit=1732714 [w=256] 
load=262144 (~100%)

"Runqueue 0" tells that what follow is information about the pCPUs and
the vCPUs of that runqueue (it's the same label used above for, showing
more general runqueue information.

"RUNQ:" tells that what follows is the content of the runqueue, i.e.,
the vCPUs which are runnable but not running (the one that were running
have been printed already, next to the pCPU they're running on), asn
are waiting for their turn in this runqueue.
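As a toy sketch of that structure (the code below is purely
illustrative, with made-up names and layout; it is not the actual
credit2 code):

```c
#include <stdio.h>

/* Toy model of the credit2 debug dump, just to illustrate the structure
 * of the output (names and layout are mine, not the actual Xen code). */
struct vcpu { int dom, id; };
struct pcpu { int id; struct vcpu *running; /* NULL if idle */ };

/* Dump one runqueue: first every pCPU, each followed by the vCPU it is
 * currently running (if any); then "RUNQ:", i.e., the vCPUs that are
 * runnable but waiting for a pCPU. */
static int dump_runqueue(char *buf, int len, int rqi,
                         struct pcpu *cpus, int ncpus,
                         struct vcpu *waiting, int nwaiting)
{
    int n = 0, i;

    n += snprintf(buf + n, len - n, "Runqueue %d:\n", rqi);
    for (i = 0; i < ncpus; i++) {
        n += snprintf(buf + n, len - n, "CPU[%02d]\n", cpus[i].id);
        if (cpus[i].running)  /* running vCPUs are shown next to their pCPU */
            n += snprintf(buf + n, len - n, "   run: [%d.%d]\n",
                          cpus[i].running->dom, cpus[i].running->id);
    }
    n += snprintf(buf + n, len - n, "RUNQ:\n");
    for (i = 0; i < nwaiting; i++)
        n += snprintf(buf + n, len - n, "   %d: [%d.%d]\n",
                      i, waiting[i].dom, waiting[i].id);
    return n;
}
```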

Any better?

> > diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
> > index 24b4b22..f2d979c 100644
> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > 
> > +    /* current VCPU (nothing to say if that's the idle vcpu). */
> > +    svc = rt_vcpu(curr_on_cpu(cpu));
> > +    if ( svc && !is_idle_vcpu(svc->vcpu) )
> > +    {
> > +        rt_dump_vcpu(ops, svc);
> > +    }
> 
> Maybe it is better to print the CPU number if the CPU is running an
> idle VCPU.
> The printk info could be:
>          printk("CPU[%02d]: idle\n", cpu);
> 
This is the output for RTDS, with this patch applied, as it is, with
one idle and one busy pCPU:


(XEN) [  399.222117] Scheduler: SMP RTDS Scheduler (rtds)
(XEN) [  399.222120] Global RunQueue info:
(XEN) [  399.222122] Global DepletedQueue info:
(XEN) [  399.222125] Global Replenishment Events info:
(XEN) [  399.222132] [    1.1 ] cpu 9, (10000000, 4000000), cur_b=4000000 
cur_d=399230000000 last_start=399220062444
(XEN) [  399.222136]             onQ=0 runnable=1 flags=2 effective 
hard_affinity=8-9
(XEN) [  399.222139] Domain info:
(XEN) [  399.222141]    domain: 1
(XEN) [  399.222147] [    1.0 ] cpu 8, (10000000, 4000000), cur_b=3973291 
cur_d=399230000000 last_start=399220130217
(XEN) [  399.222151]             onQ=0 runnable=0 flags=0 effective 
hard_affinity=8-9
(XEN) [  399.222157] [    1.1 ] cpu 9, (10000000, 4000000), cur_b=4000000 
cur_d=399230000000 last_start=399220062444
(XEN) [  399.222161]             onQ=0 runnable=1 flags=2 effective 
hard_affinity=8-9
(XEN) [  399.222164] CPUs info:
(XEN) [  399.222167] CPU[08]
(XEN) [  399.222169] CPU[09]
(XEN) [  399.222175] [    1.1 ] cpu 9, (10000000, 4000000), cur_b=4000000 
cur_d=399230000000 last_start=399220062444
(XEN) [  399.222179]             onQ=0 runnable=1 flags=2 effective 
hard_affinity=8-9

I don't see what adding "idle" buys us, compared to just listing the
CPU without printing anything else. And, on a big and mostly idle
system, it would be printed a lot of times (next to each idle pCPU).

Those pCPUs are running nothing, so they're quite obviously idle. It
really is that simple, IMO.
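For reference, the difference between the two options can be sketched
like this (a toy model; neither the data structures nor the format
strings are the actual sched_rt.c ones):

```c
#include <stdio.h>

/* Illustrative sketch of the two options being discussed (toy model,
 * not the actual sched_rt.c code): running_vcpu[cpu] < 0 stands for
 * "the idle vCPU is current on this CPU". */
static void dump_cpu(char *buf, int len, int cpu,
                     const int *running_vcpu, int mark_idle)
{
    if (running_vcpu[cpu] < 0) {
        if (mark_idle)
            /* Meng's suggestion: an explicit "idle" marker. */
            snprintf(buf, len, "CPU[%02d]: idle\n", cpu);
        else
            /* The patch as posted: just the bare CPU line. */
            snprintf(buf, len, "CPU[%02d]\n", cpu);
    } else {
        snprintf(buf, len, "CPU[%02d] run=%d\n", cpu, running_vcpu[cpu]);
    }
}
```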

TBH, what I don't especially like in the output above is, within the
vCPU info being printed:
 - the spaces inside '[' ']';
 - the big numbers;
 - the fact that last_start is rather useless (it's more tracing info
   than debug dump info, IMO);
 - the fact that the various queues info and CPUs info are not 
   displayed closer together, and even have "Domain info:" in between
   them (which is because of schedule.c, not sched_rt.c, I know);
 - the word "info" after "Global RunQueue", "Global DepletedQueue", 
   "Global Replenishment Events";
 - the word "Global", in the names above;
 - the onQ and runnable flags being printed, which I don't think are
   really necessary or useful;
 - the lack of scheduler-wide information (e.g., the tickled mask, the
   next timeout of the replenishment timer, etc.).

But this is material for another patch. :-)
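Just to give an idea of the kind of line I have in mind (the
formatting below is completely hypothetical, my own suggestion only,
with the ns values from the dump shown in ms):

```c
#include <stdio.h>

/* One possible shape for a more compact RTDS vCPU line (purely a
 * suggestion, not existing code): no spaces inside the brackets,
 * budget and period in ms rather than ns, and no last_start. */
static int fmt_rt_vcpu(char *buf, int len, int dom, int vcpu,
                       long period_ns, long budget_ns, long cur_b_ns)
{
    return snprintf(buf, len,
                    "[%d.%d] period=%ldms budget=%ldms cur_b=%ldms",
                    dom, vcpu, period_ns / 1000000, budget_ns / 1000000,
                    cur_b_ns / 1000000);
}
```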

Going back to printing "idle" or not, also remember that this is debug
output, meant to be mostly useful for developers, or with help from
developers. And developers can easily check in the code what having
just the CPU ID printed means (in case it's not obvious, which I think
it is, or they don't remember).

That being said, it's not that I can't live with the added "idle"
indication. But I like it less and would prefer not to add it.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
