
[Xen-devel] RE: Host NUMA information in dom0



Ian Pratt wrote on Fri, 5 Feb 2010 at 09:39:09:

>> Attached is the patch which exposes the host NUMA information to dom0.
>> With the patch, the "xm info" command now also gives the CPU topology &
>> host NUMA information. This will later be used to build guest NUMA
>> support.
>> 
>> The patch basically changes the physinfo sysctl, adds the topology_info &
>> numa_info sysctls, and changes the Python & libxc code accordingly.
> 
> 
> It would be good to have a discussion about how we should expose NUMA
> information to guests.
> 
> I believe we can control the desired allocation of memory from nodes and
> creation of guest NUMA tables using VCPU affinity masks combined with a
> new boolean option to enable exposure of NUMA information to guests.
> 

I agree. 

> For each guest VCPU, we should inspect its affinity mask to see which
> nodes the VCPU is able to run on, thus building a set of 'allowed node'
> masks. We should then compare all the 'allowed node' masks to see how
> many unique node masks there are -- this corresponds to the number of
> NUMA nodes that we wish to expose to the guest if this guest has NUMA
> enabled. We would aportion the guest's pseudo-physical memory equally
> between these virtual NUMA nodes.
> 

Right.
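
To check my understanding, here is a rough toolstack-side sketch in Python:
derive an allowed-node mask per VCPU from its affinity, treat each unique
mask as one virtual node, and split the guest's pseudo-physical memory
evenly between them. The cpu_to_node map, the helper name and the returned
structure are only placeholders for illustration, not the actual libxc/xm
interfaces.

def build_virtual_nodes(vcpu_affinity, cpu_to_node, guest_mem_mb):
    # vcpu_affinity: list of CPU sets, one per VCPU (from the affinity masks)
    # cpu_to_node:   physical CPU -> physical NUMA node (from topology info)

    # 1. Per-VCPU 'allowed node' mask: the nodes its affinity mask spans.
    allowed = [frozenset(cpu_to_node[c] for c in cpus) for cpus in vcpu_affinity]

    # 2. Each unique node mask becomes one virtual NUMA node.
    unique_masks = sorted(set(allowed), key=sorted)

    # 3. Apportion pseudo-physical memory equally between the virtual nodes.
    mem_per_vnode = guest_mem_mb // len(unique_masks)

    vnodes = []
    for vnode_id, mask in enumerate(unique_masks):
        vnodes.append({
            'vnode':     vnode_id,
            'pnodes':    sorted(mask),
            'vcpus':     [v for v, m in enumerate(allowed) if m == mask],
            'memory_mb': mem_per_vnode,
        })
    return vnodes

# e.g. 4 VCPUs, VCPUs 0-1 pinned to node 0's CPUs and 2-3 to node 1's CPUs,
# for a 4GB guest:
#   build_virtual_nodes([{0,1}, {0,1}, {2,3}, {2,3}], {0:0, 1:0, 2:1, 3:1}, 4096)
#   -> two virtual nodes, 2048MB each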

> If guest NUMA is disabled, we just use a single node mask which is the
> union of the per-VCPU node masks.
> 
> Where allowed node masks span more than one physical node, we should
> allocate memory to the guest's virtual node by pseudo-randomly striping
> memory allocations (in 2MB chunks) across the specified physical
> nodes. [pseudo-random is probably better than round-robin]

Do we really want to support this? I don't think the allowed node masks
should span more than one physical NUMA node. We also need to consider I/O
devices.
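
That said, if a virtual node does end up spanning several physical nodes, my
reading of the striping scheme (plus the single-node union fallback when
guest NUMA is disabled) is roughly the sketch below. The seed and the chunk
placement list are purely illustrative, not a proposal for the actual
allocator interface.

import random

CHUNK_MB = 2  # stripe granularity suggested above

def node_placement_for_vnode(pnodes, vnode_mem_mb, seed=0):
    # Pseudo-randomly pick a physical node for each 2MB chunk of a virtual
    # node whose allowed node mask spans more than one physical node.
    rng = random.Random(seed)  # fixed seed only so the example is repeatable
    return [rng.choice(sorted(pnodes)) for _ in range(vnode_mem_mb // CHUNK_MB)]

def single_node_fallback(allowed_masks):
    # Guest NUMA disabled: one node mask that is the union of all the
    # per-VCPU allowed node masks.
    union = set()
    for mask in allowed_masks:
        union |= set(mask)
    return union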

> 
> Make sense? I can provide some worked examples.
> 

Examples are appreciated.

Thanks,
Jun
___
Intel Open Source Technology Center




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

