
Re: [Xen-devel] [Patch 1/2] support of cpupools in xl: update cpumask handling for cpu pools in libxc and python



On Tue, 2010-10-05 at 14:50 +0100, Juergen Gross wrote:
> >
> > This will leak the previous value of ptr if realloc() fails. You need to
> > do:
> >       tmp = realloc(ptr, ....)
> >       if (!tmp) {
> >               free(ptr);
> >               LIBXL__LOG_ERRNO(...);
> >               return NULL;
> >       }
> >       ptr = tmp;
> >
> >
> 
> Should be changed in other places, too:
> libxc/xc_tmem.c
> libxl/libxl.c (sometimes not even checked for error)

Undoubtedly, but let's not make things worse ;-)

I've added this to my todo list...
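
For completeness, the pattern I have in mind is roughly the following
(a minimal sketch only; the append_bytes name and the surrounding code
are illustrative, not an existing libxl helper):

    #include <stdlib.h>
    #include <string.h>

    /* Grow ptr by n bytes and append src; frees ptr on allocation
     * failure so the original buffer is not leaked. */
    static char *append_bytes(char *ptr, size_t old_len,
                              const char *src, size_t n)
    {
        char *tmp = realloc(ptr, old_len + n);
        if (!tmp) {
            free(ptr);      /* don't leak the previous allocation */
            return NULL;    /* caller would LIBXL__LOG_ERRNO(...) and bail */
        }
        ptr = tmp;
        memcpy(ptr + old_len, src, n);
        return ptr;
    }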

> >> diff -r 71f836615ea2 tools/python/xen/lowlevel/xc/xc.c
> >> --- a/tools/python/xen/lowlevel/xc/xc.c Fri Sep 24 15:54:39 2010 +0100
> >> +++ b/tools/python/xen/lowlevel/xc/xc.c Fri Oct 01 09:03:17 2010 +0200
> >> @@ -241,7 +241,7 @@ static PyObject *pyxc_vcpu_setaffinity(X
> >>       if ( xc_physinfo(self->xc_handle,&info) != 0 )
> >>           return pyxc_error_to_exception(self->xc_handle);
> >>
> >> -    nr_cpus = info.nr_cpus;
> >> +    nr_cpus = info.max_cpu_id + 1;
> >>
> >>       size = (nr_cpus + cpumap_size * 8 - 1)/ (cpumap_size * 8);
> >>       cpumap = malloc(cpumap_size * size);
> >
> > Is this (and the equivalent in getinfo) an independent bug fix for a
> > pre-existing issue or does it somehow relate to the rest of the changes?
> > I don't see any corresponding change to xc_vcpu_setaffinity is all.
> 
> It's an independent fix. OTOH it's cpumask related, so I put it in...
> xc_vcpu_setaffinity is not touched as it takes the cpumask size as
> parameter.

Please separate unrelated fixes into their own patches. Not least
because it allows you to accurately changelog them.
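
(As an aside, the size calculation in the hunk above just rounds
nr_cpus up to a whole number of cpumap elements. A minimal standalone
sketch, assuming cpumap_size is the element size in bytes, e.g. 8:

    #include <stdio.h>

    int main(void)
    {
        int nr_cpus = 131;            /* e.g. max_cpu_id = 130 */
        int cpumap_size = 8;          /* assumed bytes per cpumap element */
        int bits_per_elem = cpumap_size * 8;

        /* Round up: 131 CPUs need 3 elements of 64 bits each. */
        int size = (nr_cpus + bits_per_elem - 1) / bits_per_elem;

        printf("%d element(s), %d bytes\n", size, cpumap_size * size);
        return 0;
    }

Using max_cpu_id + 1 instead of nr_cpus presumably sizes the map for
the highest possible CPU ID rather than just the count of online CPUs.)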

Thanks,
Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
