
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On 07/24/2015 04:44 PM, Dario Faggioli wrote:
On Fri, 2015-07-24 at 12:28 +0200, Juergen Gross wrote:
On 07/23/2015 04:07 PM, Dario Faggioli wrote:

FWIW, I used to think that the kernel was the better place, as Juergen
is saying, while now I'm more convinced that the tools would be more
appropriate, as Boris is saying.

I've collected some information from the linux kernel sources as a base
for the discussion:

That's great, thanks for this!

The complete numa information (cpu->node and memory->node relations) is
taken from the acpi tables (srat, slit for "distances").

Ok. And I already have a question (as I lost track of things a bit).
What you just said about ACPI tables is certainly true for baremetal and
HVM guests, but for PV? At the time I was looking into it, together with
Elena, there were Linux patches being produced for the PV case, which
makes sense.
However, ISTR that both Wei and Elena mentioned recently that those
patches have not been upstreamed in Linux yet... Is that the case? Maybe
not all, but at least some of them are there? Because if not, I'm not
sure I see how a PV guest would even see a vNUMA topology (which it
does).

Of course, I can go and check, but since you just looked, you may have
it fresh and clear already. :-)

I checked "bottom up", so when I found the acpi scan stuff I stopped
searching how the kernel obtains numa info. During my search I found no
trace of any pv-numa support in the kernel. And a quick "grep -i numa" in
arch/x86/xen and drivers/xen didn't reveal anything. Same for a complete
kernel source search for "vnuma".


The topology information is obtained via:
- intel:
    + cpuid leaf b with subleafs, leaf 4
    + cpuid leaf 2 and/or leaf 1 if leaf b and/or 4 isn't available
- amd:
    + cpuid leaf 8000001e, leaf 8000001d, leaf 4
    + msr c001100c
    + cpuid leaf 2 and/or leaf 1 if leaf b and/or 4 isn't available

The scheduler is aware of:
- smt siblings (from topology)
- last-level-cache siblings (from topology)
- node siblings (from numa information)

Right. So, this confirms what we were guessing: we need to "reconcile"
these two sources of information (from the guest point of view).

Both the 'in kernel' and 'in toolstack' approach should have all the
necessary information to make things match, I think. In fact, in
toolstack, we know what the vNUMA topology is (we're parsing and
actually putting it in place!). In kernel, we know it as we read it from
tables or hypercalls (isn't that so, for PV guest?).

In fact, I think that it is the topology, i.e., what comes from CPUID
and MSRs, that needs to adapt and follow vNUMA, as much as possible. Do
we agree on this?

I think we have to be very careful here. I see two possible scenarios:

1) The vcpus are not pinned 1:1 on physical cpus. The hypervisor will
   try to schedule the vcpus according to their numa affinity. So they
   can change pcpus at any time in case of very busy guests. I don't
   think the linux kernel should treat the cpus differently in this
   case, as doing so would be in vain given the Xen scheduler's
   activity. So we should use the "null" topology in this case.

2) The vcpus of the guest are all pinned 1:1 to physical cpus. The Xen
   scheduler can't move vcpus between pcpus, so the linux kernel should
   see the real topology of the used pcpus in order to optimize for this
   picture.

This only covers the scheduling aspect, of course.
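The 1:1 placement of scenario 2 can be expressed in the guest's xl
config; a minimal sketch, with illustrative pcpu numbers (the list form
of cpus= pins each vcpu to the corresponding entry):

```
# 4 vcpus, pinned hard 1:1: vcpu0->pcpu4, vcpu1->pcpu5, ...
vcpus = 4
cpus  = ["4", "5", "6", "7"]
```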


IMO, the thing boils down to these:

  1) from where (kernel vs. toolstack) is it easiest and most effective
     to enact the CPUID fiddling? As in, can we do that in toolstack?
     (Andrew was not so sure, and Boris found issues, although Jan seems
     to think they're no show stopper.)
     I'm quite certain that we can do that from inside the kernel,
     although, how early would we need to be doing it? Do we have the
     vNUMA info already?

  2) when tweaking the values of CPUID and other MSRs, are there other
     vNUMA (and topology in general) constraints and requirements we
     should take into account? For instance, do we want, for licensing
     reasons, all (or most) of the vcpus to be siblings, rather than full
     sockets? Etc.
      2a) if yes, how and where are these constraints specified?

If looking at 1) only, it still looks to me that doing things within the
kernel would be the way to go.

When looking at 2), OTOH, toolstack variants start to be more
appealing, especially depending on our answer to 2a). In fact,
in case we want to give the user a way to specify this
siblings-vs-cores-vs-sockets information, it IMO would be good to deal
with that in tools, rather than having to involve Xen or Linux!

It will preferentially move tasks from one cpu to another first between
smt siblings, second between llc siblings, third between node siblings
and last between all cpus.

Yep, this part, I knew.

Maybe there is room for "fixing" this at this level, hooking up inside
the scheduler code... but I'm shooting in the dark here, without having
checked whether and how this would really be feasible.

Uuh, I don't think a change to the scheduler on Xen's behalf would
really be appreciated. :-)

I'd rather fiddle with the cpu masks on the different levels to let the
scheduler do the right thing.

One thing I don't like about this approach is that it would potentially
solve vNUMA and other scheduling anomalies, but...

cpuid instruction is available for user mode as well.

...it would not do any good for other subsystems, and user level code
and apps.

Indeed. I think the optimal solution would be two-fold: give the
scheduler the information it needs to react correctly via a kernel
patch not relying on cpuid values, and fiddle with the cpuid values
from the xen tools according to the needs of other subsystems and/or
user code (e.g. licensing).


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

