
Re: [Xen-devel] [PATCH v2] libxc: fix leak of t_info in xc_tbuf_get_size()



On Thu, 2016-02-11 at 09:52 +0000, Ian Campbell wrote:
> On Thu, 2016-02-11 at 14:02 +0530, Harmandeep Kaur wrote:
>
> > diff --git a/tools/libxc/xc_tbuf.c b/tools/libxc/xc_tbuf.c
> > index 695939a..d96cc67 100644
> > --- a/tools/libxc/xc_tbuf.c
> > +++ b/tools/libxc/xc_tbuf.c
> > @@ -70,11 +70,13 @@ int xc_tbuf_get_size(xc_interface *xch, unsigned long *size)
> >                      sysctl.u.tbuf_op.buffer_mfn);
> > 
> >      if ( t_info == NULL || t_info->tbuf_size == 0 )
> > -        return -1;
> > +        rc = -1;
> > +    else
> > +        *size = t_info->tbuf_size;
> > 
> > -    *size = t_info->tbuf_size;
> > +    xenforeignmemory_unmap(xch->fmem, t_info, *size);
> 
> *size could be uninitialised here (in the error path), and even in the
> success case I don't think t_info->tbuf_size is the right argument here;
> it needs to be the size which was passed to the map function, i.e.
> sysctl.u.tbuf_op.size.
> 
And I think both are also issues with the current code, and, more
importantly, not what Coverity is complaining about in the referenced CID?

To be clear, I'm not arguing that they're not issues we should fix (I
don't know about the tbuf_size vs. tbuf_op.size question, but I can
check). I'm genuinely asking whether we should do that here, as compared
to in a preparatory or follow-up patch, and let this one be the one that
placates Coverity on that particular issue.
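
For what it's worth, a minimal sketch addressing both of those points could
look something like the below (purely illustrative, not the committed fix;
it assumes rc, initialised to 0, t_info and the sysctl variable from the
surrounding function):

    /* Sketch only: set *size exclusively on success, so the error path
     * never propagates an uninitialised value. */
    if ( t_info == NULL || t_info->tbuf_size == 0 )
        rc = -1;
    else
        *size = t_info->tbuf_size;

    /* Unmap with the same size that was passed to the mapping call,
     * i.e. sysctl.u.tbuf_op.size, rather than with the *size output
     * parameter; guard against a NULL mapping as well. */
    if ( t_info != NULL )
        xenforeignmemory_unmap(xch->fmem, t_info, sysctl.u.tbuf_op.size);

    return rc;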

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

