Re: [Xen-devel] [PATCH 2/2] libxl: libxl__spawn_qdisk_backend has to close opened files on error
On Fri, 2013-11-22 at 12:45 +0100, Roger Pau Monné wrote:
> On 21/11/13 19:50, Ian Jackson wrote:
> > Ian Jackson writes ("Re: [PATCH 2/2] libxl: libxl__spawn_qdisk_backend
> > has to close opened files on error"):
> >> Roger Pau Monne writes ("[PATCH 2/2] libxl: libxl__spawn_qdisk_backend
> >> has to close opened files on error"):
> >>> Coverity-ID: 1130517 and 1130518
> >>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> >>
> >> I don't think that's the right fix.  I think the fds are leaked in
> >> the success case too.  How about this?
> >
> > Maybe you'd prefer a version which at least compiles...
> >
> > Ian.
> >
> > From: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> > Date: Thu, 21 Nov 2013 18:37:16 +0000
> > Subject: [PATCH v2] libxl: libxl__spawn_qdisk_backend closes fds
> >
> > This function needs to close both null and logfile_w on both error and
> > normal exits.  (The child gets its own copy during the fork, and the
> > parent doesn't need them any more.)
> >
> > Use the standard initialise-to-unallocated, always-free style.  As a
> > result the label "error" becomes "out", and only makes the callback if
> > rc is nonzero.
> >
> > Signed-off-by: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
> > Coverity-ID: 1130517 and 1130518
> > Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> > Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> > ---
> >  tools/libxl/libxl_dm.c |   19 ++++++++++++-------
> >  1 file changed, 12 insertions(+), 7 deletions(-)
> >
> > diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> > index 292e351..548378d 100644
> > --- a/tools/libxl/libxl_dm.c
> > +++ b/tools/libxl/libxl_dm.c
> > @@ -1343,7 +1343,7 @@ void libxl__spawn_qdisk_backend(libxl__egc *egc, libxl__dm_spawn_state *dmss)
> >      flexarray_t *dm_args;
> >      char **args;
> >      const char *dm;
> > -    int logfile_w, null, rc;
> > +    int logfile_w = -1, null = -1, rc;
> >      uint32_t domid = dmss->guest_domid;
> >
> >      /* Always use qemu-xen as device model */
> > @@ -1366,7 +1366,7 @@ void libxl__spawn_qdisk_backend(libxl__egc *egc, libxl__dm_spawn_state *dmss)
> >      logfile_w = libxl__create_qemu_logfile(gc, GCSPRINTF("qdisk-%u", domid));
> >      if (logfile_w < 0) {
> >          rc = logfile_w;
> > -        goto error;
> > +        goto out;
> >      }
> >      null = open("/dev/null", O_RDONLY);
> >
> > @@ -1393,17 +1393,22 @@ void libxl__spawn_qdisk_backend(libxl__egc *egc, libxl__dm_spawn_state *dmss)
> >      dmss->spawn.detached_cb = device_model_detached;
> >      rc = libxl__spawn_spawn(egc, &dmss->spawn);
> >      if (rc < 0)
> > -        goto error;
> > +        goto out;
> >      if (!rc) { /* inner child */
> >          setsid();
> >          libxl__exec(gc, null, logfile_w, logfile_w, dm, args, NULL);
> >      }
> >
> > -    return;
> > +    rc = 0;
> >
> > -error:
> > -    assert(rc);
> > -    dmss->callback(egc, dmss, rc);
> > + out:
> > +    if (logfile_w >= 0) close(logfile_w);
> > +    if (null >= 0) close(null);
> > +
> > +    /* rc is nonzero iff we had an error; if we had no error then
> > +     * spawn succeeded and we will continue in a further callback */
> > +    if (rc)
> > +        dmss->callback(egc, dmss, rc);
>
> I didn't catch that when modifying the original patch to be more
> similar to libxl__spawn_local_dm, but IMHO you could use
> device_model_spawn_outcome here, which will print a nice error message
> about the device model being unable to start.
> Apart from this (that can be done in a separate patch):
>
> Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

The version of the patch at
http://bugs.xenproject.org/xen/mid/%3C21134.22006.356685.408733@xxxxxxxxxxxxxxxxxxxxxxxx%3E/raw
(which should be reasonably unmangled) has gotten quoted-printable'd
somewhere.

It would also be good to add error handling on the opening of null as
well. Currently, on failure there, libxl__exec will translate the -1
into STDxxx_FILENO.

Ian.
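The initialise-to-unallocated, always-free style that Ian's patch adopts
can be shown in isolation. The sketch below uses invented names
(spawn_helper, logpath) and plain POSIX calls rather than libxl
internals; only the pattern matches the patch:

    /* A minimal, self-contained illustration of the pattern: every fd
     * starts at -1 ("unallocated"), all exits funnel through one label,
     * and cleanup closes only what was actually opened. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int spawn_helper(const char *logpath)
    {
        int logfile_w = -1, null = -1, rc;

        logfile_w = open(logpath, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (logfile_w < 0) { rc = -1; goto out; }

        null = open("/dev/null", O_RDONLY);
        if (null < 0) { rc = -1; goto out; }

        /* ... fork/exec a child here; it inherits its own copies of the
         * fds, so the parent can close its copies unconditionally ... */

        rc = 0;

     out:
        /* Runs on success and failure alike, so no path leaks an fd. */
        if (logfile_w >= 0) close(logfile_w);
        if (null >= 0) close(null);
        return rc;
    }

    int main(void)
    {
        int rc = spawn_helper("/tmp/qdisk-test.log");
        fprintf(stderr, "spawn_helper: %s\n", rc ? "failed" : "ok");
        return rc ? 1 : 0;
    }

The point of initialising to -1 is that the out label cannot tell which
open() calls had already run, so the sentinel value is what makes the
unconditional cleanup safe.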
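The follow-up Ian suggests for the null fd might look something like the
hunk below, applied on top of the v2 patch. LOGE and ERROR_FAIL follow
libxl's conventions of the time, but this is a sketch of the idea rather
than a committed fix:

         null = open("/dev/null", O_RDONLY);
    +    /* Sketch: fail cleanly if /dev/null cannot be opened, instead
    +     * of handing -1 to libxl__exec, which would substitute the
    +     * child's existing STDxxx_FILENO descriptors. */
    +    if (null < 0) {
    +        LOGE(ERROR, "unable to open /dev/null");
    +        rc = ERROR_FAIL;
    +        goto out;
    +    }

Because the v2 patch already routes all exits through the out label,
this check needs no cleanup code of its own; the shared close() calls
handle logfile_w, which is already open at that point.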