
Re: [Xen-devel] [BUGFIX][PATCH 4/4] hvm_save_one: allow the 2nd instance to be fetched for PIC.



>>> On 12.12.13 at 01:56, Don Slutz <dslutz@xxxxxxxxxxx> wrote:
> @@ -114,20 +111,20 @@ int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
>      }
>      else
>      {
> -        uint32_t off;
> +        uint32_t off, add;

"add" is a pretty odd name for what this is being used for. Why
don't you use desc->length directly?

>  
>          rv = -EBADSLT;
> -        for (off = 0; off < ctxt.cur; off += hvm_sr_handlers[typecode].size) {
> +        for (off = 0; off < ctxt.cur; off += add + sizeof (struct hvm_save_descriptor)) {
>              struct hvm_save_descriptor *desc
>                     = (struct hvm_save_descriptor *)&ctxt.data[off];
> +            add = desc->length;
>              if (instance == desc->instance) {
>                  rv = 0;
>                  if ( copy_to_guest(handle,
>                                     ctxt.data
>                                     + off
>                                     + sizeof (struct hvm_save_descriptor),
> -                                   hvm_sr_handlers[typecode].size
> -                                   - sizeof (struct hvm_save_descriptor)) )
> +                                   add) )

Further, the way this was done before means that for multi-
instance records all records would get copied out, but the
caller would have no way of knowing that (except by assuming
that behavior, e.g. "known to be the case for PIC").

Which shows another shortcoming of the interface: There's no
buffer size being passed in from the caller, yet we have variable
size records. Which means latent data corruption in the caller.

Hence I think rather than complicating the logic here, we should
change the interface to pass a size in and back out, which will
not only avoid corrupting memory, but also allow the guest to
recognize multi-instance data being returned.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
