
Re: [Xen-devel] [PATCH 2/2] libxl: clean up qemu-save and qemu-resume files



On Wed, Jun 03, 2015 at 10:58:51AM +0100, Ian Campbell wrote:
> On Mon, 2015-06-01 at 18:24 +0100, Wei Liu wrote:
> > These files are leaked when using qemu-trad stubdom.  They are
> > intermediate files created by libxc. Unfortunately they don't fit well
> > in our userdata scheme. Clean them up after we destroy the guest;
> > we're sure they are not useful anymore at that point.
> 
> Could this be done in the parent process at some point following
> domain_destroy_domid_cb or domain_destroy_cb perhaps?
> 

Yes, of course.
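
Something like the below is roughly what that would look like in the
parent (only a sketch built from the helpers the patch already uses;
the helper name, the LOG() calls and the exact spot after
domain_destroy_domid_cb reports success are my assumptions, not tested
code):

    /* Hypothetical helper for the parent process: remove the
     * qemu-save / qemu-resume files once the destroyer child has
     * reported success.  Relies on libxl internals (gc, GCSPRINTF,
     * libxl__remove_file, libxl__device_model_savefile) as used
     * elsewhere in libxl.c. */
    static void remove_device_model_files(libxl__gc *gc, uint32_t domid)
    {
        int rc;

        /* qemu-save file: intermediate file written by libxc on save */
        rc = libxl__remove_file(gc,
                                libxl__device_model_savefile(gc, domid));
        if (rc) LOG(WARN, "failed to remove qemu-save file for domain %u",
                    domid);

        /* qemu-resume file: intermediate file written by libxc on restore */
        rc = libxl__remove_file(gc,
                 GCSPRINTF(XC_DEVICE_MODEL_RESTORE_FILE".%u", domid));
        if (rc) LOG(WARN, "failed to remove qemu-resume file for domain %u",
                    domid);
    }

In the parent, failures would only be logged rather than turned into
errors, since by that point the domain is already gone and removing
these files is best-effort cleanup anyway.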

> I think we don't want to do things in sub-processes that don't need to
> be done there, just to keep things simpler, and I think the logging is
> more reliable too.
> 
> 
> > 
> > Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> > ---
> >  tools/libxl/libxl.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > index 9117b01..ad2290d 100644
> > --- a/tools/libxl/libxl.c
> > +++ b/tools/libxl/libxl.c
> > @@ -1686,6 +1686,15 @@ static void devices_destroy_cb(libxl__egc *egc,
> >  
> >          rc = xc_domain_destroy(ctx->xch, domid);
> >          if (rc < 0) goto badchild;
> > +        /* Clean up qemu-save and qemu-resume files. They are
> > +         * intermediate files created by libxc. Unfortunately they
> > +         * don't fit in the existing userdata scheme very well.
> > +         */
> > +        rc = libxl__remove_file(gc, libxl__device_model_savefile(gc, domid));
> > +        if (rc < 0) goto badchild;
> > +        rc = libxl__remove_file(gc,
> > +                 GCSPRINTF(XC_DEVICE_MODEL_RESTORE_FILE".%u", domid));
> > +        if (rc < 0) goto badchild;
> >          _exit(0);
> >  
> >      badchild:
> 
