
Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue



On 27.05.20 00:00, Dario Faggioli wrote:
Hey,

thanks for the review, and sorry for replying late... I was busy with
something and then was trying to implement a better balancing logic, as
discussed with Juergen, but with only partial success...

On Thu, 2020-04-30 at 08:45 +0200, Jan Beulich wrote:
On 29.04.2020 19:36, Dario Faggioli wrote:
@@ -852,14 +862,61 @@ cpu_runqueue_match(const struct
[...]
+        ASSERT(rcpu != cpu);
+        if ( !cpumask_test_cpu(rcpu, cpumask_scratch_cpu(cpu)) )
+        {
+            /*
+             * For each CPU already in the runqueue, account for it and for
+             * its sibling(s), independently from whether such sibling(s) are
+             * in the runqueue already or not.
+             *
+             * Of course, if there are sibling CPUs in the runqueue already,
+             * only count them once.
+             */
+            cpumask_or(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
+                       per_cpu(cpu_sibling_mask, rcpu));
+            nr_smts += nr_sibl;

This being common code, is it appropriate to assume all CPUs have
the same number of siblings?

You mention common code because you are thinking of differences between
x86 and ARM? On ARM --although there might be (I'm not sure) chips that
have SMT, or that we may want to identify and treat as if they were
SMT-- we currently have no support for that, so I don't think it is a
problem.

On x86, I am not aware of cases where the number of threads differs
among cores or sockets... are there any?

Besides, we already have some SMT-specific code around (especially in
scheduling).

Even beyond that, iirc the sibling mask
represents the online or parked siblings, but not offline ones. For
the purpose here, don't you rather care about the full set?

This is actually a good point. I indeed care about the number of
siblings a thread has, in general, not only about the ones that are
currently online.

In v2, I'll be using boot_cpu_data.x86_num_siblings, of course wrapped
in a helper that just returns 1 for ARM. What do you think, is this
better?
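
Something along these lines is what I have in mind (just a rough,
untested sketch; the helper name is made up):

    /*
     * Number of hardware threads each core was built with, regardless
     * of how many of them are currently online.
     */
    static inline unsigned int cpu_max_smt_degree(void)
    {
    #ifdef CONFIG_X86
        return boot_cpu_data.x86_num_siblings;
    #else
        return 1;   /* no SMT support (yet) on other architectures */
    #endif
    }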

What about HT vs AMD Fam15's CUs? Do you want both to be treated
the same here?

Are you referring to the cores that, AFAIUI, share the L1i cache? If
yes, I thought about it, and ended up _not_ dealing with them here, but
I'm still a bit unsure.

Cache-oriented runqueue organization will be the subject of another
patch series, and that's why I kept them out. However, that's a rather
special case with a lot in common with SMT... Just in case, is there
already a way to identify them easily in the code, like with a mask or
something?

Also, could you outline the intentions behind this logic in the
description, so the goal can be matched with what gets done?

Sure, I will try to state it more clearly.

@@ -900,6 +990,12 @@ cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
          rqd->pick_bias = cpu;
          rqd->id = rqi;
      }
+    else
+        rqd = rqd_valid;
+
+    printk(XENLOG_INFO "CPU %d (sibling={%*pbl}) will go to runqueue %d with {%*pbl}\n",
+           cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)), rqd->id,
+           CPUMASK_PR(&rqd->active));

Iirc there's one per-CPU printk() already. On large systems this isn't
very nice, so I'd like to ask that their total number at least not grow
further. Ideally there would be a less verbose summary after all CPUs
have been brought up at boot, with per-CPU info logged only during CPU
hot online.

Understood. The problem is that, here in the scheduling code, I don't
see an easy way to tell when we have finished bringing up CPUs... And
it's probably not worth looking too hard (let alone adding logic) only
for the sake of printing this message.

cpupool_init() is the perfect place for that.
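
E.g., something like the following, invoked once all the boot-time CPUs
are up (a rough, untested sketch only; the function name is made up,
and the exact credit2 fields and array bounds are assumptions -- in
practice this would have to be reached via a scheduler hook, as
cpupool_init() is generic code):

    static void csched2_dump_runqueues(const struct csched2_private *prv)
    {
        unsigned int rqi;

        for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
        {
            const struct csched2_runqueue_data *rqd = &prv->rqd[rqi];

            /* Skip runqueue slots that never got any CPU assigned. */
            if ( cpumask_empty(&rqd->active) )
                continue;

            printk(XENLOG_INFO "csched2: runqueue %u: {%*pbl}\n",
                   rqi, CPUMASK_PR(&rqd->active));
        }
    }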


Juergen



 

