Re: [Xen-devel] [PATCH] xen-netback: correctly check failed allocation
On Fri, Oct 16, 2015 at 10:05:21AM +0100, Wei Liu wrote:
> On Thu, Oct 15, 2015 at 02:02:47PM -0400, Insu Yun wrote:
> > I changed the patch to a valid format.
> >
> > On Thu, Oct 15, 2015 at 2:02 PM, Insu Yun <wuninsu@xxxxxxxxx> wrote:
> >
> > > Since vzalloc can fail under memory pressure, write -ENOMEM to
> > > xenstore to indicate the error.
> > >
> > > Signed-off-by: Insu Yun <wuninsu@xxxxxxxxx>
> > > ---
> > >  drivers/net/xen-netback/xenbus.c | 6 ++++++
> > >  1 file changed, 6 insertions(+)
> > >
> > > diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> > > index 929a6e7..56ebd82 100644
> > > --- a/drivers/net/xen-netback/xenbus.c
> > > +++ b/drivers/net/xen-netback/xenbus.c
> > > @@ -788,6 +788,12 @@ static void connect(struct backend_info *be)
> > >         /* Use the number of queues requested by the frontend */
> > >         be->vif->queues = vzalloc(requested_num_queues *
> > >                                   sizeof(struct xenvif_queue));
> > > +       if (!be->vif->queues) {
> > > +               xenbus_dev_fatal(dev, -ENOMEM,
> > > +                                "allocating queues");
> > > +               return;
> >
> > I didn't use "goto err" because no other error handling is required
> > here.
> >
>
> It's recommended in kernel coding style to use "goto" style error
> handling. I personally prefer that to an arbitrary return in the
> function body, too.
>
> It's not a matter of whether other error handling is required or not;
> it's about cleaner code that is easy to reason about and a consistent
> coding style.
>
> The existing code is not perfect, but that doesn't mean we should
> follow a bad example.

And to be clear, I don't want to block this patch just because of this
coding style point. It is still an improvement and fixes a real problem.

So:

Acked-by: Wei Liu <wei.liu2@xxxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
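For reference, the "goto err" pattern Wei Liu describes looks roughly like
the sketch below. This is not the xen-netback code: struct queue,
setup_queues(), and the use of calloc() in place of vzalloc() are
hypothetical stand-ins chosen so the example builds as ordinary userspace
C; only the control flow (every failure path jumping to a single err:
label) is the point being illustrated.

        /*
         * Minimal sketch of "goto"-style error handling, assuming a
         * hypothetical setup_queues() helper.  NOT the xen-netback code:
         * calloc() stands in for vzalloc() so this builds in userspace.
         */
        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct queue { int id; };

        static int setup_queues(unsigned int num_queues, struct queue **out)
        {
                struct queue *queues;
                int ret;

                queues = calloc(num_queues, sizeof(*queues));
                if (!queues) {
                        ret = -ENOMEM;  /* record the failure ...          */
                        goto err;       /* ... and take the single exit path */
                }

                *out = queues;
                return 0;

        err:
                /* One place to report and unwind; easy to extend later. */
                fprintf(stderr, "setup_queues failed: %d\n", ret);
                return ret;
        }

        int main(void)
        {
                struct queue *queues = NULL;

                if (setup_queues(8, &queues) == 0) {
                        printf("allocated 8 queues\n");
                        free(queues);
                }
                return 0;
        }

The value of the single err: label is that later changes which introduce
partial state (additional allocations, reference counts, registered
callbacks) have exactly one unwind path to extend, which is the
consistency Wei Liu is arguing for.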