
Re: [Xen-users] How To Pin domU VCPU To Specific CPU During Instance Creation


  • To: adriant@xxxxxxxxxx
  • From: "Todd Deshane" <deshantm@xxxxxxxxx>
  • Date: Tue, 8 Jul 2008 10:25:16 -0400
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 08 Jul 2008 07:25:54 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>



On Tue, Jul 8, 2008 at 6:54 AM, Adrian Turcu <adriant@xxxxxxxxxx> wrote:
Hi all

I was browsing the archives for a solution to my "problem", but with no luck.
Here is the scenario:

Host:
Hardware: Dell PE 1950, 4 x dual core CPU, 16GB RAM
OS: FC8, kernel 2.6.21-2950.fc8xen
Xen version: 3.1.0-rc7-2950.fc8

Guests:
OS: FC8, kernel 2.6.21-2950.fc8xen

I want to be able, during guest instance creation, to pin each of the VCPUs to a specific CPU core.
I can do that after the instance is up using the "xm vcpu-pin" command, but I would love to be able to do it straight from the config file.
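For illustration, this is the per-VCPU form I was hoping the config file would
accept (I have not confirmed that my xm version takes a list here — element N
would pin VCPU N):

cpus = ["4", "5"]   # hoped-for per-VCPU form: VCPU 0 -> core 4, VCPU 1 -> core 5
vcpus = 2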

The two config files:

### shared-db4
kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
name = "shared-db4"
memory = 8192
cpus = "4,5"
vcpus = 2
vif = [ 'mac=00:16:3E:13:02:01, bridge=br162', 'mac=00:16:3E:13:04:01, bridge=br164' ]
disk = [ 'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r' ,
'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-0-part1,hdb1,w' ,
'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-1-part1,hdc1,w' ,
'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-2-part1,hdd1,w' ]
root = "/dev/hda1 ro"
extra = "3 selinux=0 enforcing=0"
on_reboot   = 'restart'
on_crash    = 'restart'



### shared-smq6
kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
name = "shared-smq6"
memory = 2560
cpus = "1,2"
vcpus = 2
vif = [ 'mac=00:16:3E:13:03:03, bridge=br163' ]
disk = [ 'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r' ,
'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-0-part1,hdb1,w' ,
'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-1-part1,hdc1,w' ,
'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-2-part1,hdd1,w' ]
root = "/dev/hda1 ro"
extra = "3 selinux=0 enforcing=0"
on_reboot   = 'restart'
on_crash    = 'restart'


"xm vcpu-list" output:
Name                              ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                           0     0     0   r--  118567.5 any cpu
Domain-0                           0     1     -   --p       2.9 any cpu
Domain-0                           0     2     -   --p      30.4 any cpu
Domain-0                           0     3     -   --p       2.2 any cpu
Domain-0                           0     4     -   --p       3.2 any cpu
Domain-0                           0     5     -   --p       2.0 any cpu
Domain-0                           0     6     -   --p       2.0 any cpu
Domain-0                           0     7     -   --p       3.8 any cpu
shared-db4                         6     0     4   r--  446383.3 4
shared-db4                         6     1     5   -b-   89830.3 5
shared-smq4                        2     0     6   -b-   53710.6 6-7
shared-smq4                        2     1     6   -b-   87263.8 6-7
shared-smq6                        5     0     1   -b-   21681.7 1-2
shared-smq6                        5     1     1   -b-   31198.6 1-2

shared-db4 was altered after instance creation by running "xm vcpu-pin shared-db4 0 4 ; xm vcpu-pin shared-db4 1 5";
the rest of the guests are as they were created with the "xm create <config file>" command or automatically started at host reboot (via the /etc/xen/auto folder).
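In the meantime, a small wrapper around those two commands would do the pinning
at creation time. A sketch (the script name and argument layout are just my own
convention, and the config path is illustrative):

#!/bin/bash
# create-pinned.sh: create a guest, then pin each VCPU to the given cores.
# Usage: create-pinned.sh <config file> <domain name> <core for VCPU0> [<core for VCPU1> ...]
config=$1 ; name=$2 ; shift 2
/usr/sbin/xm create "$config"
vcpu=0
for cpu in "$@" ; do
    /usr/sbin/xm vcpu-pin "$name" "$vcpu" "$cpu"
    vcpu=$((vcpu + 1))
done

E.g. "create-pinned.sh /etc/xen/shared-db4 shared-db4 4 5" would reproduce the
pinning shown above for shared-db4.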

Don't know if this has an impact or not, but I am using the sedf scheduler and I have a cron job that sets weight=1 for all newly created instances:
#!/bin/bash

# Change the sedf weight to 1 for any domain whose weight is still 0
# (in "xm sched-sedf" output, field 1 = domain name, field 7 = weight).
/usr/sbin/xm sched-sedf | grep -v Name | tr -s ' ' | cut -d' ' -f1,7 | \
    while read name weight ; do
        [ "$weight" -eq 0 ] && /usr/sbin/xm sched-sedf "$name" -w1
    done
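To confirm the weight actually changed for a given guest, I grep the same
listing, for example:

/usr/sbin/xm sched-sedf | grep shared-db4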


The reason:

I can see a large percentage in the "CPU steal" column in the guest domains
when the systems are under heavy CPU pressure.
Changing the CPU affinity on each VCPU seems to keep "CPU steal" in the guests at almost 0 during similar system loads.

I also came across this old article (maybe still valid):

http://virt.kernelnewbies.org/ParavirtBenefits

which in particular states:

"The time spent waiting for a physical CPU is never billed against a process,
allowing for accurate performance measurement even when there is CPU time contention between *multiple virtual machines*.

The amount of time the virtual machine slowed down due to such CPU time contention is split out as so-called "steal time"
in /proc/stat and properly displayed in tools like vmstat(1), top(1) and sar(1)."
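Concretely, inside a guest I watch the "st" (steal) column while the box is
under load, e.g.:

vmstat 1 5    # 5 samples at 1-second intervals; "st" is the last CPU column

Under load this shows the large steal percentages I mentioned; after pinning it
stays at almost 0 (assuming the guest's procps is new enough to report steal
time at all).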

Is this because the CPU affinity is shared with Domain-0?
Maybe I am mixing things up here; nevertheless, I'd like to be able to pin each VCPU to a physical CPU core (if that makes sense).


Thank you in advance,
Adrian





--
Todd Deshane
http://todddeshane.net
check out our book: http://runningxen.com
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

