
Re: [Xen-devel] strange phenomenon on CPU affinity


  • To: likechou <likechou@xxxxxxxxx>
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  • Date: Fri, 15 Mar 2013 14:36:16 +0100
  • Cc: xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Fri, 15 Mar 2013 13:37:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

On 15.03.2013 10:08, likechou wrote:
> Hello,
>       My testing machine has two quad-core CPUs (they support hyperthreading,
> but I disabled it in the BIOS). I use Xen 4.0.1 as the hypervisor. When I run
> a test with 8 VMs, the CPU affinity of the VMs is very strange. Like this:

> vm_name   vcpu_num  cpu_affinity
> Domain-0  8         any
> VM1       4         1,3,5,7
> VM2       4         1,3,5,7
> VM3       4         1,3,5,7
> VM4       4         1,3,5,7
> VM5       4         1,3,5,7
> VM6       4         0,2,4,6
> VM7       4         0,2,4,6
> VM8       4         0,2,4,6

> I do not set the CPU affinity in the configuration file, and I cannot find
> where the hypervisor sets the CPU affinity in the source code. In this
> situation, the 4 VCPUs of each VM are bound to 4 PCPUs permanently: 5 VMs run
> on one set of PCPUs and the other 3 run on the other set. This is unfair to
> these VMs.

I'd suspect the NUMA optimization. xend tries to optimize domain placement by
pinning each domain's vcpus to cores in the same NUMA node. Normally the
overall performance is better with this optimization. You can disable it by
specifying

numa=off

as an additional xen boot parameter for the hypervisor (not the dom0 kernel).
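For example, with GRUB legacy the parameter belongs on the hypervisor's kernel
line in menu.lst, not on the dom0 module line (the file names and menu entry
below are illustrative, not taken from your setup):

title Xen 4.0.1
        root (hd0,0)
        kernel /boot/xen.gz numa=off
        module /boot/vmlinuz-2.6.32-xen root=/dev/sda1
        module /boot/initrd-2.6.32-xen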
You can see whether you have NUMA active with

xm info

It will display line(s) like:

node_to_cpu            : node0:0-3

If you see multiple nodes, NUMA is active.
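On a two-socket machine like yours the output would look more like this if
NUMA is active (illustrative values, assuming one node per socket):

node_to_cpu            : node0:0-3
                         node1:4-7

As an alternative to disabling NUMA globally, you could also re-pin the vcpus
of an affected domain at runtime, e.g.

xm vcpu-pin VM1 all 0-7

(syntax: xm vcpu-pin <domain> <vcpu|all> <cpus>).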


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

