
[Xen-users] cpu affinity



Subject: cpu affinity

Hello,

I work at a hosting company, and we have noticed some trouble with automatic
CPU affinity on our latest servers.

We end up with, for example, a VM with 16 vCPUs whose affinity allows it to
run on only 12 of our 24 CPUs, leaving a lot of CPUs completely unused while
overbooking the others (leading to very bad performance when that VM is under
load).

We try not to overbook our servers, so that there is always at least one
physical CPU for each vCPU. Some of the tests we ran to try to understand what
is going on are shown below.

Are we doing something wrong?
What are the best practices for CPU affinity and pinning?
Is there any impact on VM migration?

------------------------------------------------------------------------

The following tests were done on a dual AMD Opteron(tm) Processor 6174 machine
(24 cores in total), with 48 GB of RAM.

xm info shows 4 node_to_cpu associations:
node0:0,2,4,6,8,10
node1:12,14,16,18,20,22
node2:13,15,17,19,21,23
node3:1,3,5,7,9,11

Xen is started with the following options:
dom0_mem=512M dom0_max_vcpus=2 dom0_vcpus_pin

dom0 has 2 vCPUs, pinned to physical CPUs 0 and 1.
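
For reference, here is a sketch of how those options might appear in a GRUB
(legacy) boot entry; the hypervisor, kernel and initrd paths are illustrative
assumptions, not our exact setup:

title Xen
    root (hd0,0)
    kernel /boot/xen.gz dom0_mem=512M dom0_max_vcpus=2 dom0_vcpus_pin
    module /boot/vmlinuz-2.6.32-5-xen-amd64 ro root=/dev/sda1
    module /boot/initrd.img-2.6.32-5-xen-amd64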

We are using xm create, xm vcpu-list, and xm destroy. The configuration files
only change between tests in the following lines (a full example config is
sketched below):
vcpus = '1'
memory = '2000'
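
To make the methodology explicit, here is a minimal sketch of the kind of
config file we pass to xm create; the name, disk and vif lines are
placeholders for illustration only:

name   = 'vm1'
vcpus  = '1'
memory = '2000'
disk   = [ 'phy:/dev/vg0/vm1,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]
# note: no explicit 'cpus =' line, so the affinity is chosen automatically

xm create /etc/xen/vm1.cfg
xm vcpu-list vm1
xm destroy vm1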

------------------------------------------

First, let's test with just one VM, in addition to our dom0.

vcpus   mem(MB) cpu affinity
1       2000    12,14,16,18,20,22
1       16000   12,14,16,18,20,22
1       48000   12,14,16,18,20,22
6       48000   12,14,16,18,20,22
7       2000    12-23
13      2000    0,2,4,6,8,10,12-23
17      2000    any
24      2000    any
24      48000   any

OK, with just one VM, it is given the CPUs of as many NUMA nodes as needed to
cover its vCPU count, and the amount of memory just doesn't matter.
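
As a side note, the affinity picked at creation time can be inspected and
widened afterwards with xm; a quick sketch, 'vm1' standing for whatever name
the domain was created with:

xm vcpu-list vm1          # shows, per vCPU, the current CPU and the affinity
xm vcpu-pin vm1 all 0-23  # allow every vCPU of vm1 to run on any of the 24 CPUs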

------------------------------------------

Let's try with 2 VMs.

vm      vcpus   mem(MB) cpu affinity
1       7       2000    12-23
2       14      2000    0-11

2 vCPUs for dom0, 7 for vm1, 14 for vm2... total: 23 vCPUs for 24 CPUs.

But CPUs 0-11 carry both dom0 (2 vCPUs) and vm2 (14 vCPUs): 16 vCPUs on
12 CPUs. The other 12 CPUs (12-23) only have to handle vm1's 7 vCPUs, so on
average 5 of them are just... free.

Here we have a VM with 14 vCPUs confined to 12 CPUs! That one is overbooked
all by itself, regardless of what else is running.

------------------------------------------
vm      vcpus   mem(MB) cpu affinity
1       8       2000    12-23
2       16      2000    0,2,4,6,8,10,12-23

dom0 runs on CPUs 0 and 1.
vm1 runs on 8 out of the 12 CPUs 12-23.
vm2 runs on 12 out of the 18 CPUs 0,2,4,6,8,10,12-23.
CPUs 3,5,7,9,11 are simply unused.

If we consider the group 0,2,4,6,8,10,12-23,
it runs 1 + 8 + 12 = 21 vCPUs.
That is 21 vCPUs bound to a group of 18 CPUs, while 5 CPUs still have no VM
associated with them at all.

------------------------------------------

vm      vcpus   mem(MB) cpu affinity
[old]   14      2000    0-11,13,15,17,19,21,23
1       8       2000    12,14,16,18,20,22
2       16      2000    12-23

I had an old VM still running and using a lot of CPUs ([old] above); I stopped
it as soon as I noticed, but that did not change anything about the other VMs'
CPU affinity.

I ended up with:
1 VM with 8 vCPUs confined to 6 CPUs,
1 VM with 16 vCPUs confined to 12 CPUs, 6 of them shared with the first VM,
and 10 CPUs just unused.
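
At this point, the workaround we are considering is to stop relying on the
automatic placement and to pin each VM explicitly from its config file; a
sketch, the CPU ranges below being arbitrary examples rather than a
recommendation:

# in vm1's config file
vcpus = '8'
cpus  = '2-9'       # confine all of vm1's vCPUs to CPUs 2-9

# in vm2's config file
vcpus = '16'
cpus  = '10-23'     # confine vm2 to the remaining CPUs (still 16 vCPUs on 14 CPUs)

But we would rather understand why the automatic affinity behaves the way it
does.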

------------------------------------------

--
Adrien URBAN, Systems, Network & Security Expert - SN3 Manager
---
www.nbs-system.com, 140 Bd Haussmann, 75008 Paris
Std: +33 158 566 080 / S.Tech: +33 158 566 088 / Fax: +33 158 566 081
Bargento 2012, 29 May 2012 at the CNIT: www.bargento.com


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

