
[Xen-devel] [PATCH] [Linux] ia64, xencomm: fix XEN_SYSCTL_cpupool_op



Hi,

This patch is against linux-2.6.18-xen.hg.

The cpumap member of struct xen_sysctl_cpupool_op is only used when the
operation is XEN_SYSCTL_CPUPOOL_OP_INFO or XEN_SYSCTL_CPUPOOL_OP_FREEINFO.
For all other operations, xencomm_map() on cpumap fails, and the whole
XEN_SYSCTL_cpupool_op call fails with it.

This patch fixes it.
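For reference, after the patch the XEN_SYSCTL_cpupool_op case in
xencomm_privcmd_sysctl() looks roughly like the sketch below. The added
check and the xencomm_map() call are taken from the hunk; the comments and
the trailing continuation marker are mine, and the exact surrounding code
is an assumption about the existing tree.

	case XEN_SYSCTL_cpupool_op:
		/*
		 * cpumap is only meaningful for the INFO and FREEINFO
		 * operations; for any other op, skip the mapping so an
		 * unused guest handle cannot make xencomm_map() fail
		 * and abort the whole sysctl.
		 */
		if (kern_op.u.cpupool_op.op != XEN_SYSCTL_CPUPOOL_OP_INFO &&
		    kern_op.u.cpupool_op.op != XEN_SYSCTL_CPUPOOL_OP_FREEINFO)
			break;
		/* nr_cpus is a bit count; ROUND_DIV(..., 8) gives the bitmap size in bytes. */
		desc = xencomm_map(
			xen_guest_handle(kern_op.u.cpupool_op.cpumap.bitmap),
			ROUND_DIV(kern_op.u.cpupool_op.cpumap.nr_cpus, 8));
		/* ... existing descriptor/error handling continues as before ... */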

Signed-off-by: KUWAMURA Shin'ya <kuwa@xxxxxxxxxxxxxx>
-- 
  KUWAMURA Shin'ya
# HG changeset patch
# User KUWAMURA Shin'ya <kuwa@xxxxxxxxxxxxxx>
# Date 1283495302 -32400
# Node ID 800fb02afdce9f0bde4a95c9c1e6d97f1dc27313
# Parent  9b1adfb8b0b3b37c13f06c0adb8dd17b2a0a077d
ia64, xencomm: fix XEN_SYSCTL_cpupool_op

The cpumap member of struct xen_sysctl_cpupool_op is only used when the
operation is XEN_SYSCTL_CPUPOOL_OP_INFO or XEN_SYSCTL_CPUPOOL_OP_FREEINFO.
For all other operations, xencomm_map() on cpumap fails, and the whole
XEN_SYSCTL_cpupool_op call fails with it.

This patch fixes it.

Signed-off-by: KUWAMURA Shin'ya <kuwa@xxxxxxxxxxxxxx>

diff -r 9b1adfb8b0b3 -r 800fb02afdce arch/ia64/xen/xcom_privcmd.c
--- a/arch/ia64/xen/xcom_privcmd.c      Thu Aug 26 11:27:25 2010 +0100
+++ b/arch/ia64/xen/xcom_privcmd.c      Fri Sep 03 15:28:22 2010 +0900
@@ -283,6 +283,9 @@ xencomm_privcmd_sysctl(privcmd_hypercall
        }
 
        case XEN_SYSCTL_cpupool_op:
+               if (kern_op.u.cpupool_op.op != XEN_SYSCTL_CPUPOOL_OP_INFO &&
+                   kern_op.u.cpupool_op.op != XEN_SYSCTL_CPUPOOL_OP_FREEINFO)
+                       break;
                desc = xencomm_map(
                        xen_guest_handle(kern_op.u.cpupool_op.cpumap.bitmap),
                        ROUND_DIV(kern_op.u.cpupool_op.cpumap.nr_cpus, 8));