
Re: [Xen-devel] [PATCH] xend: Fix non-contiguous NUMA node assignment



Keir Fraser wrote:
I had a go myself: see c/s 20817. This keeps nr_nodes as well as
max_node_id, and continues to use it where it seemed to make sense to do so.

c/s 20817 seems OK to me. Actually that was how I started, but then I decided to drop max_node_id because I didn't see the sense in keeping two variables that differ just by 1. But I didn't consider chunk #3 of XendDomainInfo.py, which also fixes another bug.

Thanks!
Regards,
Andre.


 -- Keir

On 17/01/2010 17:48, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

nr_nodes was always num_online_nodes() returned by Xen -- not accounting for
holes in node id space. Hence I emulated that behaviour in the Python
extension package. If what you actually want everywhere in the Python code
is max_node_id, then please remove the nr_nodes code from xc.c and all
references to it from the Python code. I agree that using max_node_id seems
more correct than nr_nodes -- the intention was for someone to plumb that
new field properly into the Python code anyway.
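For illustration, a minimal sketch of the discrepancy being discussed (the node layout here is made up, not taken from the patch): with a hole in the node id space, num_online_nodes() and max_node_id + 1 diverge, so code that iterates range(nr_nodes) visits the wrong nodes.

```python
# Hypothetical non-contiguous NUMA layout: node 1 is offline/absent.
online_nodes = [0, 2, 3]

nr_nodes = len(online_nodes)     # what num_online_nodes() reports -> 3
max_node_id = max(online_nodes)  # highest valid node id -> 3

# With a hole in the id space, nr_nodes != max_node_id + 1, so
# iterating range(nr_nodes) would wrongly visit node 1 and skip node 3.
assert nr_nodes == 3
assert max_node_id + 1 == 4
assert list(range(nr_nodes)) != online_nodes
```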

 -- Keir

On 15/01/2010 13:28, "Andre Przywara" <andre.przywara@xxxxxxx> wrote:

Hi,

it seems that I missed a point in this whole addition of max_node_id. I
see the difference in the Xen HV part, so nr_nodes got replaced with
max_node_id in physinfo_t (and xc_physinfo_t, respectively).
But where does this value help in xend? There is not a single Python
reference to physinfo()'s max_node_id field; instead all functions
use the old (but now bogus) nr_nodes variable.
So in the attached patch I kept the xc.physinfo() returned dictionary
with only a nr_nodes field, calculated by simply adding 1 to max_node_id
from libxc. Empty nodes can (and will) be detected by iterating through
the node_to_cpus and node_to_memory lists.
Nodes without memory should not be considered during guest memory
allocation, but can still be used for CPU affinity setting if the
number of VCPUs exceeds the number of cores per node.
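A minimal sketch of the empty-node detection described above, assuming a physinfo-style dictionary with node_to_cpus and node_to_memory lists indexed by node id (the field names follow the mail; the shapes and data are assumptions for illustration):

```python
# Hypothetical physinfo dictionary; in xend this would come from xc.physinfo().
physinfo = {
    'nr_nodes': 4,  # max_node_id + 1, covering holes in the id space
    'node_to_cpus': [[0, 1], [], [2, 3], [4, 5]],     # CPUs per node id
    'node_to_memory': [1024, 0, 0, 2048],             # free memory per node id
}

# A node with neither CPUs nor memory is a hole in the node id space.
empty_nodes = [n for n in range(physinfo['nr_nodes'])
               if not physinfo['node_to_cpus'][n]
               and not physinfo['node_to_memory'][n]]

# Memoryless nodes (here node 2) are skipped for memory allocation,
# but their CPUs remain candidates for VCPU affinity.
mem_nodes = [n for n in range(physinfo['nr_nodes'])
             if physinfo['node_to_memory'][n] > 0]
```

With this layout, node 1 is detected as an empty (hole) node, while node 2 has CPUs but no memory and so is excluded only from memory allocation.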

Please correct me if I am totally wrong on this, but this seems to work
much better in my case.

Regards,
Andre.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel




--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12
----to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Karl-Hammerschmidt-Str. 34, 85609 Dornach b. Muenchen
Geschaeftsfuehrer: Andrew Bowd; Thomas M. McCoy; Giuliano Meroni
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632



