
Re: [Xen-devel] [PATCH v3 1/2] libxc/xentrace: Replace xc_tbuf_set_cpu_mask with CPU mask with xc_cpumap_t instead of uint32_t



On 06/20/2014 08:33 PM, Konrad Rzeszutek Wilk wrote:
@@ -906,6 +949,23 @@ static int parse_evtmask(char *arg)
      return 0;
  }

+static int parse_cpumask(const char *arg)
+{
+    xc_cpumap_t map;
+    uint32_t v, i;
+
+    map = malloc(sizeof(uint32_t));
+    if ( !map )
+        return -ENOMEM;
+
+    v = argtol(arg, 0);
+    for ( i = 0; i < sizeof(uint32_t); i++ )
+        map[i] = (v >> (i * 8)) & 0xff;
+
+    opts.cpu_mask = map;

Sorry for not noticing this before. If I'm reading this right, if someone sets the cpumask as a hex value, then opts.cpu_mask will point to an area of memory only 32 bits long. However, up in set_cpu_mask(), it always calls xc_tbuf_set_cpu_mask() with a bit count equal to xc_get_max_cpus().

On systems with more than 32 logical cpus, won't this cause a buffer overrun? Should you be calling xc_cpumap_alloc() here instead of malloc()?
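A rough sketch of what I mean, purely illustrative and assuming 'xch' were already open at this point (which, as the next hunk's comment notes, it isn't yet at parse time):

    /* Illustrative only, not part of the patch: allocate the map with
     * xc_cpumap_alloc() so it is sized for the host's CPU count rather
     * than a fixed 32 bits, avoiding the overrun when set_cpu_mask()
     * later passes xc_get_max_cpus() bits to xc_tbuf_set_cpu_mask(). */
    static int parse_cpumask(xc_interface *xch, const char *arg)
    {
        xc_cpumap_t map;
        uint32_t v, i;

        map = xc_cpumap_alloc(xch);   /* zeroed, host-sized map */
        if ( !map )
            return -ENOMEM;

        v = argtol(arg, 0);
        for ( i = 0; i < sizeof(uint32_t); i++ )
            map[i] = (v >> (i * 8)) & 0xff;

        opts.cpu_mask = map;
        return 0;
    }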


@@ -937,7 +997,12 @@ static void parse_args(int argc, char **argv)
              break;

          case 'c': /* set new cpu mask for filtering*/
-            opts.cpu_mask = argtol(optarg, 0);
+            /* Set opts.cpu_mask later as we don't have 'xch' set yet. */
+            if ( parse_cpumask(optarg) )

Nit: You seem to be parsing it now. :-)
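If the point is that only the xc_tbuf_set_cpu_mask() call is deferred, maybe the comment could say so explicitly; a possible rewording (just a sketch):

            /* Parse the mask now; it is only applied via
             * xc_tbuf_set_cpu_mask() later, once 'xch' has been opened. */
            if ( parse_cpumask(optarg) )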

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

