
Re: [Xen-devel] [PATCH] VT-d/RMRR: Avoid memory corruption in add_user_rmrr()



Jan,

Sure. I will look into it.

Venu


> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Monday, January 30, 2017 04:39 AM
> To: Andrew Cooper; Elena Ufimtseva; Venu Busireddy
> Cc: Xen-devel
> Subject: Re: [PATCH] VT-d/RMRR: Avoid memory corruption in add_user_rmrr()
> 
> >>> On 30.01.17 at 11:10, <andrew.cooper3@xxxxxxxxxx> wrote:
> > register_one_rmrr() already frees its parameter if errors are encountered.
> >
> > Introduced by c/s 431685e8de and spotted by Coverity.
> >
> > Coverity-ID: 1399607
> > Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> 
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> I notice, however, that register_one_rmrr() returning success
> doesn't always mean success, so in non-debug builds we may be
> left without any log message here despite there being a problem
> with what the user specified. Elena, Venu, can you look into this
> please? Perhaps the function should return a positive value in
> that case, so that the original caller can retain its current behavior
> but the newly added caller can be adjusted?
> 
> Jan
> 
> > --- a/xen/drivers/passthrough/vtd/dmar.c
> > +++ b/xen/drivers/passthrough/vtd/dmar.c
> > @@ -975,13 +975,9 @@ static int __init add_user_rmrr(void)
> >          rmrr->scope.devices_cnt = user_rmrrs[i].dev_count;
> >
> >          if ( register_one_rmrr(rmrr) )
> > -        {
> >              printk(XENLOG_ERR VTDPREFIX
> >                     "Could not register RMMR range "ERMRRU_FMT"\n",
> >                     ERMRRU_ARG(user_rmrrs[i]));
> > -            scope_devices_free(&rmrr->scope);
> > -            xfree(rmrr);
> > -        }
> >      }
> >
> >      return 0;
> > --
> > 2.1.4
> 
> 
> 

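For reference, below is a minimal, self-contained sketch of the return convention suggested above. register_one_rmrr() and add_user_rmrr() are names taken from the thread, but the stubbed bodies, the *_sketch helper names, and the choice of 1 as the positive "accepted but ignored" value are assumptions for illustration only, not the actual Xen implementation.

/*
 * Illustrative sketch only, NOT actual Xen code: how register_one_rmrr()
 * could use a positive return value for "input accepted but RMRR ignored",
 * so add_user_rmrr() can still log in release builds while the ACPI-table
 * caller keeps treating the result as non-fatal.
 */
#include <stdio.h>
#include <errno.h>

/* Hypothetical return convention:
 *   < 0  -> hard error; the rmrr was already freed by the callee
 *            (the point of the patch above: callers must not free again)
 *   == 0 -> RMRR registered
 *   > 0  -> RMRR silently ignored; also already freed by the callee
 */
static int register_one_rmrr(/* struct acpi_rmrr_unit *rmrr */ int outcome)
{
    return outcome; /* stand-in for the real registration logic */
}

/* Sketch of the user-supplied-RMRR caller: log on any non-zero result. */
static int add_user_rmrr_sketch(int outcome)
{
    int ret = register_one_rmrr(outcome);

    if ( ret )
        printf("Could not register RMRR range (ret=%d)\n", ret);

    return 0; /* failures of individual user ranges are not fatal */
}

/* Sketch of the ACPI-table caller: only negative values are errors,
 * so its existing behaviour is unchanged by the positive return. */
static int acpi_caller_sketch(int outcome)
{
    int ret = register_one_rmrr(outcome);

    return ret < 0 ? ret : 0;
}

int main(void)
{
    add_user_rmrr_sketch(0);        /* registered: no message       */
    add_user_rmrr_sketch(1);        /* ignored: message is logged   */
    add_user_rmrr_sketch(-EINVAL);  /* error: message is logged     */

    return acpi_caller_sketch(1);   /* treated as success, returns 0 */
}

With this split, the original ACPI-table path needs no change beyond tolerating a positive value, and only the newly added user-RMRR path has to be adjusted to log, which is the behaviour Jan asks Elena and Venu to look into.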