
Re: [Xen-users] VCPU and CPU Manipulation


  • To: Dustin.Henning@xxxxxxxxxxx
  • From: "Omer Khalid" <Omer.Khalid@xxxxxxx>
  • Date: Wed, 8 Oct 2008 13:31:34 +0200
  • Cc: xen-users list <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 08 Oct 2008 04:32:22 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi Dustin,

Thanks for your best wishes with my ongoing research.

My eventual aim is to run 3 domUs, with dom0 assigned to the last core, to observe the performance of job execution. From the CreditScheduler page on the Xen wiki it seems, as you also pointed out earlier, that SMP load balancing only comes into play when CPUs are not pinned. And modifying /etc/rc.local is a clever way of pinning dom0 at boot time.

But it is still NOT clear to me how the Weight/Cap parameters will influence domU resource utilization, especially when the domains are pinned to a CPU. I assume there should not be any influence, since SMP load balancing is not done, but I have not found a clear statement on the Xen wiki.
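For concreteness, the kind of tuning I have in mind is along these lines (just a sketch using the credit scheduler's xm interface; the weight and cap values are only examples, and CernVM is the domU from my setup):

    # show the current credit-scheduler parameters for the domU
    xm sched-credit -d CernVM

    # give CernVM twice the default weight (the default is 256); as I read
    # the wiki, weight should only matter when domains compete for the same
    # physical CPU
    xm sched-credit -d CernVM -w 512

    # cap CernVM at 50% of one physical CPU; the cap is enforced even if
    # the CPU would otherwise be idle
    xm sched-credit -d CernVM -c 50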

I am wondering if someone from the Xen dev team could shed some light on this.

Thanks,

Omer


On Tue, Oct 7, 2008 at 1:50 PM, Dustin Henning <Dustin.Henning@xxxxxxxxxxx> wrote:
Omer,
       CPU affinity is which CPUs/cores a VCPU is allowed to use, while the CPU/core actually in use can vary if the CPU affinity allows for more than one possibility.  That said, if you pin dom0 VCPU 0 to a CPU/core of your choice, then it won't drift around as you have observed, and after thinking about it, I believe I do pin my dom0 VCPUs manually with xm vcpu-pin commands in /etc/rc.local or something.  Additionally, if you only need to run one domU, you could still let dom0 use 3 CPUs/cores without it interfering with the CPU/core assigned to the domU; my example was assuming you would run 3 domUs and the dom0.
       I am not exceptionally familiar with scheduling, but I wouldn't think it would come into play when a CPU/core isn't being shared among multiple dom(0/U)s.  You might want to try to verify that, though, or perhaps someone else on the list can confirm.  Depending on what your research regards, it is important to note that even with PV domUs on isolated CPUs/cores, there will probably be some performance loss, though I should hope it would be negligible in such a configuration.  Good luck with your project,
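       For the record, such an rc.local entry might look roughly like this (a sketch; the CPU number is illustrative, and it assumes the xm tools are usable by the time rc.local runs):

    # /etc/rc.local (excerpt): pin dom0's VCPU 0 to physical CPU 3 at boot
    # so it no longer drifts across cores
    xm vcpu-pin Domain-0 0 3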
       Dustin


--------Original Message--------
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Omer Khalid
Sent: Tuesday, October 07, 2008 06:19
To: Dustin.Henning@xxxxxxxxxxx
Cc: xen-users list
Subject: Re: [Xen-users] VCPU and CPU Manipulation

Hello Dustin,
Thanks a lot for the detailed explanation. It has indeed clarified my understanding of VCPUs/CPUs.
What I have understood, in a nutshell, is the following; please correct me if it's wrong: "what matters is the CPU core and CPU affinity a VCPU is using, rather than just the VCPU number; thus all the domains could have the same VCPU number, say 0, but as long as they are pinned to a particular core, they are restricted to only that specific core".
It's a valid argument that, to enhance resource/CPU utilization, one should not bother too much about which core is being used by which domU. This is particularly important in situations where performance is the key criterion. But in my project, and for LHC grid computing, it is a policy decision that each grid job will be allowed to use only one core per CPU (not for performance reasons, but rather for resource accounting reasons). In a non-virtualized environment this is handled by the batch system configuration, but if the job is executed on a virtual machine, which is what I am researching, then comes the question of core utilization for a VM. Thus I stumbled upon this issue of tinkering with vcpu-set/vcpu-pin.
To achieve this, I first modified /etc/xen/xend-config.sxp and restricted dom0 to use only one CPU. Now my dom0 is using only one CPU, while all its other VCPUs are in the --p "paused" state with no CPU allocated to them. Then I launched my VM with a modified config file which had vcpus=1, cpus="0". xm vcpu-list shows the following:
[root@~]# xm vcpu-list
Name        ID  VCPU  CPU  State  Time(s)  CPU Affinity
CernVM       1     0    0  -b-       11.3  0
Domain-0     0     0    3  r--      111.5  any cpu
Domain-0     0     1    -  --p       12.1  any cpu
Domain-0     0     2    -  --p        5.3  any cpu
Domain-0     0     3    -  --p        2.7  any cpu
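For reference, the settings involved were along these lines (a sketch; the domU config file name is illustrative):

    # /etc/xen/xend-config.sxp: restrict dom0 to a single CPU
    (dom0-cpus 1)

    # domU config file (e.g. /etc/xen/cernvm.cfg, name illustrative):
    # one VCPU, pinned to physical CPU 0
    vcpus = 1
    cpus = "0"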
I had arrived at the same state earlier by using vcpu-pin/vcpu-set, but the above process (as you also advised) is much simpler and cleaner. Interestingly, I observed that before launching the domU, dom0 was using CPU 2, and later on it switched to CPU 3. But I guess that's OK, as it's not using CPU 0.
So the VCPU/CPU part is sorted out; what about the scheduling of these CPUs, using either sedf or the credit scheduler? Once a domU is restricted to one core, I want to further optimize its performance by modifying its weight via the credit scheduler, as the application to be run in the domU is memory/CPU intensive.
Thanks,
Omer

On Mon, Oct 6, 2008 at 2:51 PM, Dustin Henning <Dustin.Henning@xxxxxxxxxxx> wrote:
Omer,
      See my response following your initial post.

--------Original Message--------
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Omer Khalid
Sent: Monday, October 06, 2008 04:51
To: xen-users list
Subject: [Xen-users] VCPU and CPU Manipulation
Hi,
I have an SMP machine with dual-core CPUs (4 cores in total). I have been trying to restrict my domU to one VCPU pinned to one core, but it has not fully worked. There are two commands, "xm vcpu-set" and "xm vcpu-pin", and by using them I have observed that the sequence in which they are run plays a role. E.g., I have the following state in the beginning:
[root@lxb ~]# xm vcpu-list
Name         ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0      0     0    3  r--     5593.4  any cpu
Domain-0      0     1    1  -b-    15361.9  any cpu
Domain-0      0     2    0  -b-    10137.5  any cpu
Domain-0      0     3    -  --p       78.9  any cpu
test_lxb     20     0    2  -b-    21169.0  any cpu
What I want to achieve is that my domU (test_lxb) uses one VCPU pinned to one CPU. In the above state, both my domU and dom0 are using a VCPU 0 (currently running on CPU 2 and CPU 3, respectively). After a few "vcpu-set" and "vcpu-pin" commands, I reach the following state, where dom0 is pinned to CPU 3 and domU (test_lxb) is pinned to CPU 2:
[root@lxb ~]# xm vcpu-list
Name         ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0      0     0    3  r--     5600.4  3
Domain-0      0     1    3  -b-    15372.5  3
Domain-0      0     2    3  -b-    10140.0  3
Domain-0      0     3    -  --p       78.9  3
test_lxb     20     0    2  -b-    21169.5  2
But my domU is still using VCPU 0, which is also used by my dom0; now I would like to restrict VCPU 0 to CPU 2 for the domU only... I am wondering how to achieve this last mile?
Any ideas? Thanks for your help in advance!
Regards
--
Omer

-------------------------------------------------------
CERN -- European Organization for Nuclear
Research, IT Department, CH-1211,
Geneva 23, Switzerland
      You have misinterpreted the meaning of VCPU numbers.  VCPU 0 is the first virtual CPU for any domain, VCPU 1 is the second virtual CPU for any domain, etcetera.  Additional single-VCPU domUs will have a VCPU 0 as well.  Each VCPU 0 is actually a separate VCPU; they are each presented as CPU 0 to a different domain, and the VCPU identification just tells you what the domU sees them as (minus the V).  CPU indicates which CPU/core a VCPU is currently using, and CPU Affinity indicates which ones it is allowed to use.  Furthermore, for performance reasons, if you want dom0 to use only one CPU/core, you should assign it only one VCPU (which will be 0), so for what you are trying to do, you probably ultimately want output more like this:

Name       ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0    0     0    0  r--     5600.4  0
test_lxb    1     0    1  -b-    21169.5  1
test_abc    2     0    2  -b-    21169.5  2
test_def    3     0    3  -b-    21169.5  3
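      Something like the following should get to that state (a rough sketch, untested; it assumes the domain names from the table above):

    # reduce dom0 to a single VCPU (VCPU 0), then pin it to physical CPU 0
    xm vcpu-set Domain-0 1
    xm vcpu-pin Domain-0 0 0

    # pin each domU's VCPU 0 to its own physical CPU
    xm vcpu-pin test_lxb 0 1
    xm vcpu-pin test_abc 0 2
    xm vcpu-pin test_def 0 3

    # verify the layout
    xm vcpu-list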

      Obviously, state and time will be variable.  Additionally, which core/CPU is used for which domain shouldn't matter much.  Regarding getting to this state, the number of VCPUs dom0 has initially (and which CPUs/cores they use) is configurable (probably in /etc/xen/xend-config.sxp), and the same is true for domUs.  That said, see the example configs in /etc/xen for more info on how to do this; you should be able to make each domU start up with the CPU/core you want it to use, and then you won't really need to use vcpu-set or vcpu-pin at all.  Finally, if I don't bring it up, someone else probably will: the idea behind virtualization is to better use available processing power.  With that in mind, your domUs may not each need their own full CPU/core.  (For instance, I have a quad-core with four HVMs that have one VCPU each, where each uses a separate core, and then my dom0 has four VCPUs, where each uses a separate core; even this isn't by any means fully utilizing the hardware, but I am more concerned with maintaining optimal performance of my HVMs.)  Good luck with your project,
      Dustin





--
Omer

-------------------------------------------------------
CERN -- European Organization for Nuclear
Research, IT Department, CH-1211,
Geneva 23, Switzerland

Phone: +41 (0) 22 767 2224
Fax:     +41 (0) 22 766 8683
E-mail : Omer.Khalid@xxxxxxx
Homepage: http://cern.ch/Omer.Khalid






--
Omer

-------------------------------------------------------
CERN -- European Organization for Nuclear
Research, IT Department, CH-1211,
Geneva 23, Switzerland

Phone: +41 (0) 22 767 2224
Fax:     +41 (0) 22 766 8683
E-mail : Omer.Khalid@xxxxxxx
Homepage: http://cern.ch/Omer.Khalid
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

