
Re: [Xen-devel] [PATCH v3 1/2] libxc: introduce XC_SAVE_ID_TOOLSTACK



On Tue, 31 Jan 2012, Stefano Stabellini wrote:
> On Mon, 30 Jan 2012, Shriram Rajagopalan wrote:
> > On Mon, Jan 30, 2012 at 8:54 AM, Stefano Stabellini 
> > <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> >       Introduce a new save_id to save/restore toolstack specific extra
> >       information.
> > 
> >       Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >       ---
> >  tools/libxc/xc_domain_restore.c |   32 +++++++++++++++++++++++++++++++-
> >  tools/libxc/xc_domain_save.c    |   17 +++++++++++++++++
> >  tools/libxc/xenguest.h          |   23 ++++++++++++++++++++++-
> >  tools/libxc/xg_save_restore.h   |    1 +
> >  tools/libxl/libxl_dom.c         |    2 +-
> >  tools/xcutils/xc_restore.c      |    2 +-
> >  6 files changed, 73 insertions(+), 4 deletions(-)
> > 
> > diff --git a/tools/libxc/xc_domain_restore.c 
> > b/tools/libxc/xc_domain_restore.c
> > index 3fda6f8..ead3df4 100644
> > --- a/tools/libxc/xc_domain_restore.c
> > +++ b/tools/libxc/xc_domain_restore.c
> > @@ -45,6 +45,8 @@ struct restore_ctx {
> >     int last_checkpoint; /* Set when we should commit to the current checkpoint when it completes. */
> >     int compressing; /* Set when sender signals that pages would be sent compressed (for Remus) */
> >     struct domain_info_context dinfo;
> > +   uint8_t *toolstack_data;
> > +   uint32_t toolstack_data_len;
> > };
> > 
> > 
> > This is still leaking speculative state. You have only moved the restore 
> > code but
> > you are still reading the (potentially discardable) state into a global ctx 
> > structure.
> > 
> > Take a look at the tailbuf code right below the pagebuf_get() call in 
> > xc_domain_restore().
> > It reads the tailbuf onto a temporary buffer and then copies it onto the 
> > main one.
> > This way, if an error occurs while receiving data, one can completely 
> > discard the current
> > checkpoint (pagebuf & tmptailbuf) and restore from the previous one 
> > (tailbuf).
> > 
> > You could have a similar *consistent buffer* in the xc_domain_restore 
> > function itself, and copy the toolstack
> > stuff into it before starting to buffer a new checkpoint. Perhaps something 
> > like the snippet below?
> > 
> > + toolstack_data = realloc(toolstack_data, pagebuf.toolstack_data_len);
> > + memcpy(toolstack_data, pagebuf.toolstack_data, pagebuf.toolstack_data_len);
> > if ( ctx->last_checkpoint )
> > 
> > Though, this would incur two reallocs (once in pagebuf_get and once more in 
> > the main loop).
> > 
> > If there is a maximum size for this buffer, I would suggest mallocing it
> > once and for all, and just memcpy'ing into it.
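
(For reference, the tailbuf-style double buffering described above would look roughly like the sketch below; every name in it is illustrative, not the actual libxc code.)

/* Sketch of the double-buffering pattern: read toolstack data into a
 * per-checkpoint (speculative) buffer, and copy it into a stable buffer
 * only once the whole checkpoint has been received.  On a receive error
 * the caller drops the speculative buffer and keeps the stable copy. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct speculative_toolstack {      /* filled while receiving a checkpoint */
    uint8_t  *data;
    uint32_t  len;
};

struct stable_toolstack {           /* last completely received checkpoint */
    uint8_t  *data;
    uint32_t  len;
};

int commit_toolstack(struct stable_toolstack *stable,
                     const struct speculative_toolstack *spec)
{
    uint8_t *tmp = realloc(stable->data, spec->len);

    if ( tmp == NULL && spec->len != 0 )
        return -1;                  /* old stable copy is left untouched */

    stable->data = tmp;
    stable->len = spec->len;
    if ( spec->len != 0 )
        memcpy(stable->data, spec->data, spec->len);

    return 0;
}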
> > 
> 
> Even though the current toolstack data is tiny, I don't want to realloc
> a potentially big buffer twice or memcpy more than necessary.
> I don't want to add a maximum size either.
> What if I add two new arguments to pagebuf_get_one: a pointer to the
> buffer to be malloc'ed and the length? Like it was done with the
> callbacks argument in the previous version of this patch?
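
(Concretely, the signature change proposed here might look like the sketch below; only the last two parameters are new, and their names are invented for illustration.)

/* Hypothetical sketch: pagebuf_get_one fills in a caller-owned
 * pointer/length pair instead of stashing the blob in the restore
 * context.  The two trailing parameter names are made up for this
 * example. */
static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
                           pagebuf_t *buf, int fd, uint32_t dom,
                           uint8_t **toolstack_data,       /* malloc'ed by the callee */
                           uint32_t *toolstack_data_len);  /* set by the callee */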
> 
> 
> 
> > @@ -827,6 +829,20 @@ static int pagebuf_get_one(xc_interface *xch, struct restore_ctx *ctx,
> >          }
> >          return pagebuf_get_one(xch, ctx, buf, fd, dom);
> > 
> > +    case XC_SAVE_ID_TOOLSTACK:
> > +        {
> > +            RDEXACT(fd, &ctx->toolstack_data_len,
> > +                    sizeof(ctx->toolstack_data_len));
> > +            ctx->toolstack_data = (uint8_t*) malloc(ctx->toolstack_data_len);
> > +            if ( ctx->toolstack_data == NULL )
> > +            {
> > +                PERROR("error memory allocation");
> > +                return -1;
> > +            }
> > +            RDEXACT(fd, ctx->toolstack_data, ctx->toolstack_data_len);
> > 
> > 
> > Basically this becomes,
> >   RDEXACT(fd, &buf->toolstack_data_len, ...);
> >   buf->toolstack_data = (uint8_t *)realloc(buf->toolstack_data, buf->toolstack_data_len);
> >   RDEXACT(fd, buf->toolstack_data, buf->toolstack_data_len);
> > 
> > And please do realloc: since pagebuf_get_one is called repeatedly by the
> > main loop, using malloc here would leak memory on every iteration.
> > 
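
(Put together, the XC_SAVE_ID_TOOLSTACK case inside pagebuf_get_one might then look roughly like the sketch below; the error handling is spelled out for illustration, and buf->toolstack_data is assumed to start out NULL so that the first realloc behaves like malloc.)

case XC_SAVE_ID_TOOLSTACK:
{
    uint8_t *tmp;

    /* Chunk format: a uint32_t length followed by the opaque blob. */
    if ( RDEXACT(fd, &buf->toolstack_data_len,
                 sizeof(buf->toolstack_data_len)) )
    {
        PERROR("error reading toolstack data length");
        return -1;
    }

    /* realloc rather than malloc: the same pagebuf is reused for every
     * checkpoint, so this does not leak across iterations. */
    tmp = realloc(buf->toolstack_data, buf->toolstack_data_len);
    if ( tmp == NULL && buf->toolstack_data_len != 0 )
    {
        PERROR("error allocating toolstack data buffer");
        return -1;
    }
    buf->toolstack_data = tmp;

    if ( RDEXACT(fd, buf->toolstack_data, buf->toolstack_data_len) )
    {
        PERROR("error reading toolstack data");
        return -1;
    }

    /* Read the next chunk, like the other optional save ids above. */
    return pagebuf_get_one(xch, ctx, buf, fd, dom);
}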
> 
> I presume that this buf pointer would be an argument passed to
> pagebuf_get_one.
> In this case, I don't suppose I need to memcpy anything anywhere, do I?
> Later on, in finish_hvm, I'll just do:
> 
> 
> 
>     if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
>     {
>         if ( callbacks->toolstack_restore(dom,
>                     buf->toolstack_data, buf->toolstack_data_len,
>                     callbacks->data) < 0 )
>         {
>             PERROR("error calling toolstack_restore");
>             free(buf->toolstack_data);
>             buf->toolstack_data = NULL;
>             buf->toolstack_data_len = 0;
>             goto out;
>         }
>     }
>     free(buf->toolstack_data);
>     buf->toolstack_data = NULL;
>     buf->toolstack_data_len = 0;


Sorry about this, I configured my mailer not to send emails in UTF-8
because they can be the cause of nasty python bugs with spaces and tabs.
Let me write this code snippet again:

    if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
    {
        if ( callbacks->toolstack_restore(dom,
                    buf->toolstack_data, buf->toolstack_data_len,
                    callbacks->data) < 0 )
        {
            PERROR("error calling toolstack_restore");
            free(buf->toolstack_data);
            buf->toolstack_data = NULL;
            buf->toolstack_data_len = 0;
            goto out;
        }
    }
    free(buf->toolstack_data);
    buf->toolstack_data = NULL;
    buf->toolstack_data_len = 0;
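
(For completeness, the save side implied by this chunk format would be roughly the sketch below. Only the layout, the XC_SAVE_ID_TOOLSTACK id followed by a uint32_t length and the opaque blob, is taken from the restore code; the toolstack_save callback, its signature, and the helper itself are assumptions made for illustration.)

/* Hypothetical helper on the xc_domain_save side: ask the toolstack for
 * its opaque blob and emit it as an optional chunk.  The callback name
 * and signature are assumed, not taken from the patch. */
static int write_toolstack_chunk(int io_fd, uint32_t dom,
                                 struct save_callbacks *callbacks)
{
    int id = XC_SAVE_ID_TOOLSTACK;
    uint8_t *buf = NULL;
    uint32_t len = 0;

    if ( !callbacks || !callbacks->toolstack_save )
        return 0;                     /* nothing to emit, not an error */

    if ( callbacks->toolstack_save(dom, &buf, &len, callbacks->data) < 0 )
        return -1;

    /* Chunk id, then length, then the blob itself. */
    if ( write_exact(io_fd, &id, sizeof(id)) ||
         write_exact(io_fd, &len, sizeof(len)) ||
         write_exact(io_fd, buf, len) )
    {
        free(buf);
        return -1;
    }

    free(buf);
    return 0;
}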
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

