
Re: [Xen-devel] [PATCH 2/2] HVM: clean up hvm_save_one()



On Wed, Jun 07, 2017 at 04:07:02AM -0600, Jan Beulich wrote:
> >>> On 06.06.17 at 19:52, <wei.liu2@xxxxxxxxxx> wrote:
> > On Wed, May 31, 2017 at 01:25:26AM -0600, Jan Beulich wrote:
> >> Eliminate the for_each_vcpu() loop and the associated local variables,
> >> don't override the save handler's return code, and correct formatting.
> >> 
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> 
> >> --- a/xen/common/hvm/save.c
> >> +++ b/xen/common/hvm/save.c
> >> @@ -79,36 +79,27 @@ size_t hvm_save_size(struct domain *d)
> >>  int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> >>                   XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
> >>  {
> >> -    int rv = -ENOENT;
> >> -    size_t sz = 0;
> >> -    struct vcpu *v;
> >> -    hvm_domain_context_t ctxt = { 0, };
> >> +    int rv;
> >> +    hvm_domain_context_t ctxt = { };
> >>      const struct hvm_save_descriptor *desc;
> >>  
> >> -    if ( d->is_dying 
> >> -         || typecode > HVM_SAVE_CODE_MAX 
> >> -         || hvm_sr_handlers[typecode].size < sizeof(*desc)
> >> -         || hvm_sr_handlers[typecode].save == NULL )
> >> +    if ( d->is_dying ||
> >> +         typecode > HVM_SAVE_CODE_MAX ||
> >> +         hvm_sr_handlers[typecode].size < sizeof(*desc) ||
> >> +         !hvm_sr_handlers[typecode].save )
> >>          return -EINVAL;
> >>  
> >> +    ctxt.size = hvm_sr_handlers[typecode].size;
> >>      if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> >> -        for_each_vcpu(d, v)
> >> -            sz += hvm_sr_handlers[typecode].size;
> >> -    else 
> >> -        sz = hvm_sr_handlers[typecode].size;
> >> -    
> >> -    ctxt.size = sz;
> >> -    ctxt.data = xmalloc_bytes(sz);
> >> +        hvm_sr_handlers[typecode].size *= d->max_vcpus;
> > 
> > Why is the global size field updated with a particular domain's
> > d->max_vcpus here? AFAICT (after going through layers of macros ...)
> > hvm_sr_handlers is global and is needed when saving any HVM guest. The
> > "size" field contains the length of one record.
> > 
> > Also, you set ctxt.size before this check without taking into account the
> > number of vcpus, which looks wrong to me. Shouldn't it be (when not
> > updating hvm_sr_handlers[typecode].size)
> > 
> >    ctxt.size = hvm_sr_handlers[typecode].size * d->max_vcpus
> > 
> > ?
> 
> Right, this is complete rubbish. Should be
> 
> ctxt.size *= d->max_vcpus;
> 

Yes, this looks right to me now.
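
Just to spell it out, a minimal sketch of the corrected hunk (assuming the
surrounding code from the patch above; note the multiplication now only
touches the local ctxt and leaves the global hvm_sr_handlers table alone):

    ctxt.size = hvm_sr_handlers[typecode].size;
    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
        /* Per-vCPU records: allocate room for one record per vCPU. */
        ctxt.size *= d->max_vcpus;
    ctxt.data = xmalloc_bytes(ctxt.size);
    if ( !ctxt.data )
        return -ENOMEM;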

> >> +    ctxt.data = xmalloc_bytes(ctxt.size);
> >>      if ( !ctxt.data )
> >>          return -ENOMEM;
> >>  
> >> -    if ( hvm_sr_handlers[typecode].save(d, &ctxt) != 0 )
> >> -    {
> >> -        printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16"\n",
> >> -               d->domain_id, typecode);
> >> -        rv = -EFAULT;
> >> -    }
> >> -    else if ( ctxt.cur >= sizeof(*desc) )
> >> +    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> >> +        printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
> >> +               d->domain_id, typecode, rv);
> >> +    else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
> > 
> > I guess the intent here is to set rv while at the same time only testing
> > ctxt.cur? But why?
> 
> Well, we can't use -ENOENT as the initializer anymore, as rv is now
> being modified above. Before entering the body of the "else if"
> it needs to be -ENOENT though.
> 
> > Can the code be reorganised so that it is easier to reason about?
> 
> It probably could be, at the expense of assigning -ENOENT in two
> places.
> 

How about:

   if ( (rv = hvm_sr_handlers ...)) != 0 )
   {
   } else {
       rv = -ENOENT;

       if ( ctxt.cur >= sizeof(*desc) )
       {

       }

   }
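
i.e. filled in (purely illustrative, reusing the statements from the hunk
above; the body of the inner if, locating the requested instance and
copying it out to the handle, is elided here):

    rv = hvm_sr_handlers[typecode].save(d, &ctxt);
    if ( rv != 0 )
        printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
               d->domain_id, typecode, rv);
    else
    {
        rv = -ENOENT;

        if ( ctxt.cur >= sizeof(*desc) )
        {
            /* look up the requested instance and copy it out ... */
        }
    }

That keeps the -ENOENT assignment in a single place and avoids the comma
expression in the "else if" condition.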

> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
