
[Xen-changelog] [xen-unstable] [XEND] Apply the domain cpumask fully to every vcpu in the domain.



# HG changeset patch
# User kfraser@xxxxxxxxxxxxxxxxxxxxx
# Node ID 438ed1c4b3916058a183d6c8e731566d2f4ca1da
# Parent  9ced76fd7d9bdb10fd2e7934dc355aba5d3b5ef6
[XEND] Apply the domain cpumask fully to every vcpu in the domain.

Apply the domain cpumask to each vcpu rather than pinning each vcpu
to a single cpu chosen round-robin from the list.  This is more in
line with the comments for the cpus parameter and also allows the
credit scheduler to balance vcpus within the domain cpumask.

Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
---
 tools/python/xen/xend/XendDomainInfo.py |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff -r 9ced76fd7d9b -r 438ed1c4b391 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Tue Aug 15 10:47:26 2006 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py   Tue Aug 15 10:56:59 2006 +0100
@@ -1272,12 +1272,9 @@ class XendDomainInfo:
             # repin domain vcpus if a restricted cpus list is provided
             # this is done prior to memory allocation to aide in memory
             # distribution for NUMA systems.
-            cpus = self.info['cpus']
-            if cpus is not None and len(cpus) > 0:
+            if self.info['cpus'] is not None and len(self.info['cpus']) > 0:
                 for v in range(0, self.info['max_vcpu_id']+1):
-                    # pincpu takes a list of ints
-                    cpu = [ int( cpus[v % len(cpus)] ) ]
-                    xc.vcpu_setaffinity(self.domid, v, cpu)
+                    xc.vcpu_setaffinity(self.domid, v, self.info['cpus'])
 
             # set domain maxmem in KiB
             xc.domain_setmaxmem(self.domid, self.info['maxmem'] * 1024)

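For illustration, here is a minimal standalone sketch of the behavioural
difference (not part of the patch; cpus, max_vcpu_id and set_affinity are
hypothetical stand-ins for the XendDomainInfo fields and the
xc.vcpu_setaffinity() call):

    # Minimal sketch of the change in affinity assignment.  The names
    # below are hypothetical stand-ins for the XendDomainInfo fields
    # and the xc.vcpu_setaffinity() call.

    cpus = [0, 2]        # restricted cpu list from the domain config
    max_vcpu_id = 3      # domain has vcpus 0..3

    def set_affinity(vcpu, cpumask):
        print("vcpu %d -> cpus %s" % (vcpu, cpumask))

    # Old behaviour: each vcpu is pinned to exactly one cpu, chosen
    # round-robin from the list.
    for v in range(0, max_vcpu_id + 1):
        set_affinity(v, [int(cpus[v % len(cpus)])])
    # vcpu 0 -> [0], vcpu 1 -> [2], vcpu 2 -> [0], vcpu 3 -> [2]

    # New behaviour: every vcpu receives the full cpumask, so the
    # credit scheduler is free to balance vcpus across the whole
    # restricted set.
    for v in range(0, max_vcpu_id + 1):
        set_affinity(v, cpus)
    # vcpu 0 -> [0, 2], vcpu 1 -> [0, 2], vcpu 2 -> [0, 2], vcpu 3 -> [0, 2]

With the full mask, placement within the allowed set is decided by the
scheduler at run time rather than fixed by xend at domain build time.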