
Re: [Xen-devel] [PATCH] tools: Fix wild memory allocations from c/s 250f0b4 and 85d78b4



On 05/18/2015 10:09 AM, Andrew Cooper wrote:
On 18/05/15 15:00, Boris Ostrovsky wrote:
On 05/18/2015 08:57 AM, Andrew Cooper wrote:
These changesets cause the respective libxc functions to unconditionally
dereference their max_cpus/nodes parameters as part of initial memory
allocations.  This will fail when obtaining the correct number of
cpus/nodes from Xen, as the guest handles will not be NULL.

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

---
Spotted by XenServer's Coverity run.
---
   tools/libxl/libxl.c               |    4 ++--
   tools/misc/xenpm.c                |    4 ++--
   tools/python/xen/lowlevel/xc/xc.c |    4 ++--
   3 files changed, 6 insertions(+), 6 deletions(-)

The xenpm bug is already fixed (commit
b315cd9cce5b6da7ca89b2d7bad3fb01e7716044 in the staging tree).

I am not sure I understand why Coverity complains about the other spots.
For example, in libxl_get_cpu_topology() num_cpus can be left
uninitialized only if xc_cputopoinfo(ctx->xch, &num_cpus, NULL) fails,
in which case we go to 'GC_FREE; return ret;', so it is never used.

xc_cputopoinfo(ctx->xch, &num_cpus, NULL) unconditionally dereferences
num_cpus, reads whatever value it happens to hold, and performs a memory
allocation based on that result.
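
To make the pattern concrete, here is a minimal, self-contained sketch of
what Coverity objects to.  This is not the actual libxc code:
query_topology(), struct cpu_info and the literal "4 cpus" are hypothetical
stand-ins for xc_cputopoinfo() and its hypercall; only the shape of the
defect matches.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct cpu_info { unsigned core, socket, node; };   /* hypothetical layout */

static int query_topology(unsigned *max_cpus, struct cpu_info *info)
{
    /* Unconditional read of *max_cpus: the bounce-buffer size comes from
     * whatever the caller's variable happens to contain, even when 'info'
     * is NULL and the caller only wants the count back. */
    struct cpu_info *bounce = calloc(*max_cpus, sizeof(*bounce));
    unsigned reported = 4;              /* pretend Xen reported 4 cpus */

    if (*max_cpus && !bounce)
        return -1;

    /* A real implementation would issue the hypercall here, filling the
     * bounce buffer with up to *max_cpus entries; copy out only what both
     * sides have room for. */
    if (info && bounce) {
        unsigned n = reported < *max_cpus ? reported : *max_cpus;
        memcpy(info, bounce, n * sizeof(*info));
    }

    *max_cpus = reported;
    free(bounce);
    return 0;
}

int main(void)
{
    unsigned num_cpus;      /* uninitialized, as in the flagged callers */

    /* The first call only asks for the count, but the callee still sizes
     * an allocation from the garbage in num_cpus -- the wild allocation.
     * Initializing num_cpus (e.g. to 0) before this call avoids it. */
    if (query_topology(&num_cpus, NULL))
        return 1;

    printf("%u cpus\n", num_cpus);
    return 0;
}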

Ah, OK. xc_cputopoinfo() (or, rather, the hypervisor) doesn't actually use the value of the dereferenced num_cpus in this case, but obviously Coverity can't know that.

So Coverity cross-checks routines to see how callers use the arguments?

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

