
Re: [Xen-devel] BUG: sched=credit2 crashes system when using cpupools



On 2018-08-30 18:33, Jan Beulich wrote:
On 30.08.18 at 06:01, <netwiz@xxxxxxxxx> wrote:
Managed to get the same crash log when adding CPUs to Pool-1 as follows:

Create the pool:
(XEN) Initializing Credit2 scheduler
(XEN)  load_precision_shift: 18
(XEN)  load_window_shift: 30
(XEN)  underload_balance_tolerance: 0
(XEN)  overload_balance_tolerance: -3
(XEN)  runqueues arrangement: socket
(XEN)  cap enforcement granularity: 10ms
(XEN) load tracking window length 1073741824 ns
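
For context, a credit2 pool like the one initialised above would typically be created along these lines; the inline key=value form is a sketch, not taken from the report ("Pool-1" matches the pool name mentioned at the top, and sched="credit2" is what produces the banner above):

# xl cpupool-create name="Pool-1" sched="credit2"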

Add the CPUs:
(XEN) Adding cpu 12 to runqueue 0
(XEN)  First cpu on runqueue, activating
(XEN) Removing cpu 12 from runqueue 0
(XEN) Adding cpu 13 to runqueue 0
(XEN) Removing cpu 13 from runqueue 0
(XEN) Adding cpu 14 to runqueue 0
(XEN) Removing cpu 14 from runqueue 0
(XEN) Xen BUG at sched_credit2.c:3452
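
The sequence above corresponds to adding CPUs 12-14 to the pool one at a time; a minimal sketch of the commands involved (CPU numbers taken from the log, the exact invocation is an assumption):

# for c in 12 13 14; do xl cpupool-cpu-add Pool-1 $c; done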

Credit2 is still not the default - do things work if you don't override
the default (of using credit1)? I guess the problem is connected to the
"Removing cpu <N> from runqueue 0" messages, considering that this

    BUG_ON(!cpumask_test_cpu(cpu, &rqd->active));

is what triggers. Anyway - as Jürgen says, something for the scheduler
maintainers to look into.

Yep - just to confirm: we tested this in BOTH NUMA configurations, and credit2 crashed on both.

I switched back to sched=credit, and it seems to work as expected:
# xl cpupool-list
Name               CPUs   Sched     Active   Domain count
Pool-node0          12    credit       y          3
Pool-node1          12    credit       y          0

I've updated the subject - as this isn't a NUMA issue at all.

--
Steven Haigh

netwiz@xxxxxxxxx      | http://www.crc.id.au
+61 (3) 9001 6090     | 0412 935 897
