Re: [Xen-devel] [Patch] adjust the cpu-affinity to more than 64 cpus
Jan Beulich wrote:
> >>>> "James (song wei)" <jsong@xxxxxxxxxx> 17.03.10 09:56 >>>
>> --- a/tools/python/xen/lowlevel/xc/xc.c Mon Mar 15 17:08:29 2010 +0000
>> +++ b/tools/python/xen/lowlevel/xc/xc.c Wed Mar 17 16:51:07 2010 +0800
>> @@ -215,35 +215,54 @@
>>  {
>>      uint32_t dom;
>>      int vcpu = 0, i;
>> -    uint64_t cpumap = ~0ULL;
>> +    uint64_t *cpumap;
>>      PyObject *cpulist = NULL;
>> +    int nr_cpus, size;
>> +    xc_physinfo_t info;
>> +    xc_cpu_to_node_t map[1];
>> +    uint64_t cpumap_size = sizeof(cpumap);
>
> Perhaps sizeof(*cpumap)?

-- Yeah, you are right.

>> ...
>> +    *(cpumap + cpu / (cpumap_size * 8)) |= (uint64_t)1 << (cpu % (cpumap_size * 8));
>
> Using [] here and in similar places further down would likely make these
> constructs a little bit more legible.

-- Yes.

>> @@ -362,7 +381,11 @@
>>      uint32_t dom, vcpu = 0;
>>      xc_vcpuinfo_t info;
>>      int rc, i;
>> -    uint64_t cpumap;
>> +    uint64_t *cpumap;
>> +    int nr_cpus, size;
>> +    xc_physinfo_t pinfo = { 0 };
>> +    xc_cpu_to_node_t map[1];
>> +    uint64_t cpumap_size = sizeof(cpumap);
>
> Same as above.

>> @@ -385,17 +421,18 @@
>>                            "cpu", info.cpu);
>>
>>      cpulist = PyList_New(0);
>> -    for ( i = 0; cpumap != 0; i++ )
>> +    for ( i = 0; i < size * cpumap_size * 8; i++ )
>
> Why not simply use nr_cpus here?

-- Yes, copying nr_cpus bits is enough here.

Jan, thank you very much! I'll post the new patch here soon.

-James (Song Wei)

--
View this message in context: http://old.nabble.com/-Patch--adjust-the-cpu-affinity-to-more-than-64-cpus-tp27928229p27941020.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
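[Editor's note] To make the review points above concrete, here is a minimal, standalone sketch of the corrected pattern; it is not the actual xc.c code. It takes the element width from sizeof(*cpumap) rather than sizeof(cpumap), sets bits with [] indexing instead of *(cpumap + ...), and bounds the read-back loop by nr_cpus. The nr_cpus and cpu values are made up for illustration; in libxc, nr_cpus would come from xc_physinfo().

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical values for illustration; in libxc, nr_cpus would come
     * from xc_physinfo(). */
    int nr_cpus = 72;
    uint64_t cpumap_size = sizeof(uint64_t);  /* i.e. sizeof(*cpumap), per the review */
    int size = (nr_cpus + cpumap_size * 8 - 1) / (cpumap_size * 8);
    uint64_t *cpumap = calloc(size, cpumap_size);
    int cpu = 70, i;

    if ( !cpumap )
        return 1;

    /* Set the bit for one CPU, using [] indexing rather than *(cpumap + ...). */
    cpumap[cpu / (cpumap_size * 8)] |= (uint64_t)1 << (cpu % (cpumap_size * 8));

    /* Walk only nr_cpus bits, not size * cpumap_size * 8 of them. */
    for ( i = 0; i < nr_cpus; i++ )
        if ( cpumap[i / (cpumap_size * 8)] & ((uint64_t)1 << (i % (cpumap_size * 8))) )
            printf("cpu %d is set\n", i);

    free(cpumap);
    return 0;
}

Bounding the loop by nr_cpus also keeps the padding bits in the last 64-bit word of the map out of the result, which is the point of the final review comment.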