
Re: [Xen-devel] [PATCH v2 3/4] sysctl: Add sysctl interface for querying PCI topology



On Wed, 2015-01-07 at 10:54 -0500, Boris Ostrovsky wrote:
> On 01/07/2015 10:17 AM, Jan Beulich wrote:
> >> This is the same information (pxm -> node mapping) that we provide in
> >> XEN_SYSCTL_topologyinfo (renamed in this series to
> >> XEN_SYSCTL_cputopoinfo). Given that I expect the two topologies to be
> >> used together I think the answer is yes.
> > Building your argumentation on potentially mis-designed existing
> > interfaces is bogus. The question is - what use is a Xen internal
> > node number to a caller of a particular hypercall (other than it
> > being purely informational, e.g. for printing human readable
> > output)?
> 
> Just as with knowing the CPU/memory topology --- this will help with 
> placing a guest if we know which "proximity domain" both the device 
> and the CPUs/memory belong to.
> 
FWIW, my view on how IONUMA information could be useful is this: either
somewhere inside the toolstack, automatically, or by hand, one may want
to reason as follows:

"Ehi, network card X is on node #2, let's place domain d, to which I'm
passing through such card, on node #2 (or as much and as close as
possible to node #2), to get best performance!"

Of course, when we are inside the toolstack, we can do this automatically:

"Domain d does not come with any affinity/placement information, but it
is passed through net card X, which is on node #2, so let's try to place
it on node #2 too"

So, I think I agree with Boris that at least consistency is necessary.
Right now, pretty much everything at both the toolstack (libxl) and
command line (xl, for doing things by hand) levels uses node IDs
reported by the hypervisor, as this series is also doing.

In particular, at the command line level: affinity, cpupools, vNUMA (as
per Wei's patches)... everything speaks the "node ID language", AFAICT.

There probably would not be any serious issues in converting everything
to PXMs, or in adding PXM-based duplicates of the existing interfaces,
but I don't see why we should do such a thing... Perhaps I'm missing
what using PXMs would actually buy us?

> And if we are going to keep this as a sysctl then we need to be 
> consistent with what we do now for CPUs, which is pxm2node[]. Or change 
> the CPU topology sysctl as well, 
>
Indeed, and not only that.

> which I don't think is a good idea.
> 
Me neither.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
