
Re: [Xen-users] How To Pin domU VCPU To Specific CPU During Instance Creation


  • To: adriant@xxxxxxxxxx
  • From: "Todd Deshane" <deshantm@xxxxxxxxx>
  • Date: Tue, 8 Jul 2008 10:56:33 -0400
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 08 Jul 2008 07:57:34 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>



On Tue, Jul 8, 2008 at 10:47 AM, Adrian Turcu <adriant@xxxxxxxxxx> wrote:
Thanks for the quick reply Todd, but I guess my problem is not to exclude certain CPUs from being used by the guests,
but to pin VCPUs to specific CPUs when using a list.
Take this one for example on my config:

### shared-smq6
cpus = "1,2"
vcpus = 2

That means I use a circular list of CPU 1 and CPU 2, with 2 VCPUs that can each pick any CPU from the list.
This is true as per output of "xm vcpu-list shared-smq6" command:

Name                              ID  VCPU   CPU State   Time(s) CPU Affinity
shared-smq6                        5     0     1   -b-   21713.0 1-2
shared-smq6                        5     1     1   -b-   31214.3 1-2

What I would like is to be able to say in the config file directly, i.e. "use CPU 1 for VCPU 0 and CPU 2 for VCPU 1"
At the moment I can do that only by using "xm vcpu-pin" command.
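
In other words, today I have to run something like this by hand after the guest boots (using the shared-smq6 example above):

```sh
# pin VCPU 0 to physical CPU 1 and VCPU 1 to physical CPU 2
xm vcpu-pin shared-smq6 0 1
xm vcpu-pin shared-smq6 1 2
```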

If that is already in those threads, I cannot see it, to be honest. Could you just send the kind of config you envisage using ^ ?

I actually don't have a lot of personal experience with vcpu pinning.

That thread I gave you was the first time I saw the syntax for it.
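
If I remember it right, the idea in that thread is that "^" excludes a CPU from the affinity list in the config file, something like the following (untested on my side, so double-check against the thread):

```
# exclude CPU 0 (e.g. reserved for dom0) from the guests' affinity mask
cpus = "0-7,^0"
```

Note that this is still exclusion from a shared list, not per-VCPU pinning, so it may not be exactly what you are after.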

Any thoughts or experiences from others?

If, after a day or two on the users list, there is no response or solution, feel free to post a fresh message to xen-devel with all the details of what you have tried, what works, etc.

If it were me, I would try to read through the source code to find the answer. I can't commit to helping you with that today due to time constraints.

Good luck.

Best Regards,
Todd
 

Thank you,
Adrian


Todd Deshane wrote:
>
>
> On Tue, Jul 8, 2008 at 6:54 AM, Adrian Turcu <adriant@xxxxxxxxxx> wrote:
>
>     Hi all
>
>     I was browsing the archives to find a solution to my "problem" but
>     with no luck.
>     Here is the scenario:
>
>     Host:
>     Hardware: Dell PE 1950, 4 x dual core CPU, 16GB RAM
>     OS: FC8, kernel 2.6.21-2950.fc8xen
>     Xen version: 3.1.0-rc7-2950.fc8
>
>     Guests:
>     OS: FC8, kernel 2.6.21-2950.fc8xen
>
>     I want to be able during guest instance creation to pin down each of
>     the VCPUs to specific CPU cores.
>     I can do that after the instance is up by using "xm vcpu-pin"
>     command, but I would love to be able to do it straight from the
>     config file.
>
>
>
> I would suggest this thread:
> http://markmail.org/search/?q=xen-devel+ian+pratt+cpu+pin+syntax#query:xen-devel%20ian%20pratt%20cpu%20pin%20syntax+page:1+mid:2vlhnty3zemednba+state:results
>
> Take a look at the syntax with the ^
>
> Hope that helps,
> Todd
>
>
>
>
>     two config files:
>
>     ### shared-db4
>     kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
>     ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
>     name = "shared-db4"
>     memory = 8192
>     cpus = "4,5"
>     vcpus = 2
>     vif = [ 'mac=00:16:3E:13:02:01, bridge=br162',
>     'mac=00:16:3E:13:04:01, bridge=br164' ]
>     disk = [
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r'
>     ,
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-0-part1,hdb1,w'
>     ,
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-1-part1,hdc1,w'
>     ,
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-2-part1,hdd1,w'
>     ]
>     root = "/dev/hda1 ro"
>     extra = "3 selinux=0 enforcing=0"
>     on_reboot   = 'restart'
>     on_crash    = 'restart'
>
>
>
>     ### shared-smq6
>     kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
>     ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
>     name = "shared-smq6"
>     memory = 2560
>     cpus = "1,2"
>     vcpus = 2
>     vif = [ 'mac=00:16:3E:13:03:03, bridge=br163' ]
>     disk = [
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r'
>     ,
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-0-part1,hdb1,w'
>     ,
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-1-part1,hdc1,w'
>     ,
>     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-2-part1,hdd1,w'
>     ]
>     root = "/dev/hda1 ro"
>     extra = "3 selinux=0 enforcing=0"
>     on_reboot   = 'restart'
>     on_crash    = 'restart'
>
>
>     "xm vcpu-list" output:
>     Name                              ID  VCPU   CPU State   Time(s) CPU Affinity
>     Domain-0                           0     0     0   r--  118567.5 any cpu
>     Domain-0                           0     1     -   --p       2.9 any cpu
>     Domain-0                           0     2     -   --p      30.4 any cpu
>     Domain-0                           0     3     -   --p       2.2 any cpu
>     Domain-0                           0     4     -   --p       3.2 any cpu
>     Domain-0                           0     5     -   --p       2.0 any cpu
>     Domain-0                           0     6     -   --p       2.0 any cpu
>     Domain-0                           0     7     -   --p       3.8 any cpu
>     shared-db4                         6     0     4   r--  446383.3 4
>     shared-db4                         6     1     5   -b-   89830.3 5
>     shared-smq4                        2     0     6   -b-   53710.6 6-7
>     shared-smq4                        2     1     6   -b-   87263.8 6-7
>     shared-smq6                        5     0     1   -b-   21681.7 1-2
>     shared-smq6                        5     1     1   -b-   31198.6 1-2
>
>     shared-db4 was altered after instance creation by using "xm vcpu-pin
>     shared-db4 0 4 ; xm vcpu-pin shared-db4 1 5",
>     the rest of the guests are as they were created using "xm create
>     <config file>" command or automatically started at host reboot
>     (/etc/xen/auto folder).
>
>     Don't know if this has an impact or not, but I am using sedf
>     scheduler and I have a cron job which sets weight=1 for all newly
>     created instances:
>     #!/bin/bash
>
>     # change weight to 1
>     /usr/sbin/xm sched-sedf | grep -v Name | tr -s ' ' | cut -d\  -f7,1 |
>     while read a b ; do if [ $b -eq 0 ] ; then /usr/sbin/xm sched-sedf $a -w1 ; fi ; done
>
>
>     The reason:
>
>     I can see in the guest domains a high percentage in the "CPU
>     steal" column
>     when the systems are under heavy CPU pressure.
>     Changing the CPU affinity on each VCPU seems to keep "CPU steal" in
>     the guests at almost 0 under similar system loads.
>
>     I also came across this old article (maybe still valid):
>
>     http://virt.kernelnewbies.org/ParavirtBenefits
>
>     which in particular states:
>
>     "The time spent waiting for a physical CPU is never billed against a
>     process,
>     allowing for accurate performance measurement even when there is CPU
>     time contention between *multiple virtual machines*.
>
>     The amount of time the virtual machine slowed down due to such CPU
>     time contention is split out as so-called "steal time"
>     in /proc/stat and properly displayed in tools like vmstat(1), top(1)
>     and sar(1)."
>
>     Is this because the CPU affinity is shared with Domain-0?
>     Maybe I am mixing stuff here, nevertheless, I'd like to be able to
>     pin each VCPU to a physical CPU core (if that makes sense).
>
>
>     Thank you in advance,
>     Adrian
>
>
>     _______________________________________________
>     Xen-users mailing list
>     Xen-users@xxxxxxxxxxxxxxxxxxx
>     http://lists.xensource.com/xen-users
>
>
>
>
> --
> Todd Deshane
> http://todddeshane.net
> check out our book: http://runningxen.com





--
Todd Deshane
http://todddeshane.net
check out our book: http://runningxen.com
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

