
Re: [Xen-API] VCPUs-at-startup and VCPUs-max with NUMA node affinity



On 29 May 2012, at 13:00, James Bulpin wrote:

>  2. The CPU affinity is dropped allowing vCPUs to run on any node - the 
> memory is still on the original node so now we've got a poor placement for 
> vCPUs that happen to end up running on other nodes. This also leads to 
> additional interconnect traffic and possible cache line ping-pong.
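To put a very rough number on that cost, something like the sketch below makes
the local-versus-remote difference visible. It's mine, not from ipc-bench or
xapi; it assumes a Linux box with libnuma (link with -lnuma), and the node
numbers are illustrative:

/* Rough sketch, assuming Linux + libnuma; node numbers are illustrative.
 * A 64 MiB buffer is allocated on node 0 and then walked first from node 0
 * and then from the highest-numbered node, mimicking a vCPU that has
 * drifted away from its memory. */
#include <numa.h>
#include <stdio.h>
#include <time.h>

#define BUF_SIZE (64UL * 1024 * 1024)

static double walk(volatile char *buf, size_t len)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < len; i += 64)         /* one access per cache line */
        buf[i]++;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support\n");
        return 1;
    }

    char *buf = numa_alloc_onnode(BUF_SIZE, 0);  /* memory stays on node 0 */
    if (!buf)
        return 1;

    numa_run_on_node(0);
    walk(buf, BUF_SIZE);                         /* fault pages in on node 0 */
    printf("local : %.1f ms\n", walk(buf, BUF_SIZE) / 1e6);

    numa_run_on_node(numa_max_node());           /* now run "far away" */
    printf("remote: %.1f ms\n", walk(buf, BUF_SIZE) / 1e6);

    numa_free(buf, BUF_SIZE);
    return 0;
}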

Is there a memory-swap operation available to exchange pages from one NUMA 
domain for pages from another? I'm thinking of a scenario where CPU hotplugs 
have led to allocated memory ending up entirely on the wrong NUMA domain. Is the 
only way for the guest to resolve this to live-migrate back to localhost, so 
that it goes through a suspend/resume cycle?
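For comparison, the nearest thing I can point to on bare-metal Linux is
migrate_pages(2)/move_pages(2); a sketch of the libnuma wrapper is below
(illustrative only, not a Xen interface, and the node numbers are assumptions).
What I'm wondering is whether anything equivalent exists at the hypervisor
level for a guest's machine memory:

/* Illustrative only, and not a Xen interface: this is the bare-metal Linux
 * analogue, migrate_pages(2) via libnuma (link with -lnuma).  Node numbers
 * 0 and 1 are assumptions; moving another task's pages needs CAP_SYS_NICE. */
#include <numa.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support\n");
        return 1;
    }

    struct bitmask *from = numa_allocate_nodemask();
    struct bitmask *to   = numa_allocate_nodemask();
    numa_bitmask_setbit(from, 0);    /* pages currently on node 0 ...  */
    numa_bitmask_setbit(to, 1);      /* ... get moved over to node 1   */

    int left = numa_migrate_pages(getpid(), from, to);
    if (left < 0)
        perror("numa_migrate_pages");
    else
        printf("pages that could not be moved: %d\n", left);

    numa_free_nodemask(from);
    numa_free_nodemask(to);
    return 0;
}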

Right now we see performance like this all the time (on non-NUMA-aware Xen), 
since memory is usually allocated from a single NUMA domain. For example, on a 
48-core Magny-Cours box, Unix domain socket latency grows steadily worse as the 
peer vCPU moves further away from vCPU 0 (which also happens to be on NUMA 
domain 0): 
http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/details/tmpwlnFNM.html
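For reference, that measurement boils down to a pinned unix-domain-socket
ping-pong; a simplified sketch (not the actual ipc-bench code; the core numbers
come from the command line and are just examples) looks like this:

/* Simplified sketch of a pinned unix-domain-socket ping-pong, not the real
 * ipc-bench code.  Pass two core numbers on the command line; defaults of
 * 0 and 1 are illustrative. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define ITERS 100000

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");
}

int main(int argc, char **argv)
{
    int cpu_a = argc > 1 ? atoi(argv[1]) : 0;    /* e.g. vCPU 0            */
    int cpu_b = argc > 2 ? atoi(argv[2]) : 1;    /* e.g. a far-away vCPU   */
    int sv[2];
    char byte = 'x';

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {                           /* child: echo server     */
        pin_to_cpu(cpu_b);
        close(sv[0]);
        for (int i = 0; i < ITERS; i++) {
            if (read(sv[1], &byte, 1) != 1) break;
            if (write(sv[1], &byte, 1) != 1) break;
        }
        _exit(0);
    }

    pin_to_cpu(cpu_a);                           /* parent: client         */
    close(sv[1]);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        if (write(sv[0], &byte, 1) != 1) break;
        if (read(sv[0], &byte, 1) != 1) break;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("cpu %d <-> cpu %d: %.0f ns per round trip\n",
           cpu_a, cpu_b, ns / ITERS);

    wait(NULL);
    return 0;
}

Running it with two nearby cores and then two far-apart ones (say 0 and 1 
versus 0 and 47 on the box above) is enough to see the spread.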

-anil