Re: [Xen-devel] BUG: sched=credit2 crashes system when using cpupools
On Thu, 2018-08-30 at 18:49 +1000, Steven Haigh wrote:
> On 2018-08-30 18:33, Jan Beulich wrote:
> >
> > Anyway - as Jürgen says, something for the scheduler
> > maintainers to look into.
>
Ok, I'm back.

> Yep - I just want to confirm that we tested this in BOTH NUMA
> configurations - and credit2 crashed on both.
>
> I switched back to sched=credit, and it seems to work as expected:
> # xl cpupool-list
> Name               CPUs   Sched    Active   Domain count
> Pool-node0           12   credit     y            3
> Pool-node1           12   credit     y            0
>
Wait, in a previous message, you said: "A machine where we could get
this working every time shows". Doesn't that mean creating a separate
pool for node 1 works with both Credit and Credit2, if the node has
memory?

I mean, trying to clarify, my understanding is that you have two
systems:

 system A: node 1 has *no* memory
 system B: both node 0 and node 1 have memory

Creating a Credit pool with pcpus from node 1 always works on both
systems. OTOH, when you try to create a Credit2 pool with pcpus from
node 1, does it always crash on both systems, or does it work on
system B and crash on system A?
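Just so we are sure we're talking about the same thing, this is roughly
how I would create such a pool. Treat it as a sketch: the file name and
pool name are placeholders, and the config keys and the node:1 notation
are from my memory of xlcpupool.cfg(5) and xl(1), so double-check them:

  /etc/xen/cpupool-node1.cfg:
    # keys as per xlcpupool.cfg(5), from memory -- double-check the exact syntax
    name  = "Pool-node1"
    sched = "credit2"
    nodes = ["1"]

  # xl cpupool-cpu-remove Pool-0 node:1      # free node 1's pcpus from the default pool
  # xl cpupool-create /etc/xen/cpupool-node1.cfg
  # xl cpupool-list                          # the new credit2 pool should show up here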
I do have a NUMA box with RAM in both nodes (so similar to system B).
Last time I checked, what you're trying to do worked there, pretty much
with any scheduler combination, but I'll recheck.

I don't have a box similar to system A. I'll try to remove some of the
RAM from that NUMA box, and check what happens.
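(For completeness, the way I'd check what a box ended up looking like is
the NUMA information from xl; the output layout is from memory, so take
it as a rough pointer rather than gospel:

  # xl info -n

the numa_info part of the output should list the memory size of each
node, so a system-A-like box would show node 1 with no memory behind it.)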
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel