
Re: [Xen-devel] [PATCH v3] Sanity check xsave area when migrating or restoring from older Xen versions



On Tue, 21 Oct 2014 15:41:03 +0100
Jan Beulich <JBeulich@xxxxxxxx> wrote:

> >>> On 21.10.14 at 16:25, <dkoch@xxxxxxxxxxx> wrote:
> > On Tue, 21 Oct 2014 09:57:18 +0100
> > Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>> On 20.10.14 at 22:40, <dkoch@xxxxxxxxxxx> wrote:
> >> > @@ -2020,6 +2021,7 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> >> >          return -EOPNOTSUPP;
> >> >      }
> >> >      h->cur += sizeof (*desc);
> >> > +    overflow_start = h->cur;
> >> >  
> >> >      ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
> >> >      h->cur += desc->length;
> >> > @@ -2041,7 +2043,18 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> >> >          printk(XENLOG_G_WARNING
> >> >                 "HVM%d.%d restore mismatch: xsave length %u > %u\n",
> >> >                 d->domain_id, vcpuid, desc->length, size);
> >> > -        return -EOPNOTSUPP;
> >> > +
> >> > +        /* Make sure missing bytes are all zero. */
> >> > +        for ( i = size; i < desc->length; i++ )
> >> > +        {
> >> > +            if ( h->data[overflow_start + i] )
> >> > +            {
> >> > +                printk(XENLOG_G_WARNING
> >> > +                       "HVM%d.%d restore mismatch: xsave has non-zero data starting at %d\n",
> >> > +                       d->domain_id, vcpuid, i);
> >> > +                return -EOPNOTSUPP;
> >> 
> >> You were asked to avoid issuing two messages in this case, and iirc
> >> you indicated you would adjust your patch to do so, yet nothing
> >> really changed here. Also please print offsets in hex (%#x) and
> >> don't further proliferate the wrong format specifier used for vcpuid
> >> into newly added messages: vcpuid is "unsigned int" and hence
> >> wants to be printed using %u.
> > 
> > Sorry, slipped my mind when reformatting. Didn't notice the
> > unsignedness of vcpuid and just copied what was in the other prints;
> > will change.
> > 
> > Actually, the request was to elide the first message if the zero check
> > passed, which I interpret as: if we find a non-zero value, issue both,
> > else issue neither. I can either print both or, if you wish, print a
> > combined message, something along the line of "non-zero data found in
> > oversized buffer..." (which I think I prefer).
> 
> Don't combine them, just add your new check ahead of the existing
> printk().

OK.
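
For concreteness, here's how I read that (sketch only, not the final
patch; the surrounding context of hvm_load_cpu_xsave_states() is
abbreviated and the message wording is a placeholder):

    if ( desc->length > size )
    {
        unsigned int i;

        /* New check first: an all-zero tail is accepted silently. */
        for ( i = size; i < desc->length; i++ )
        {
            if ( h->data[overflow_start + i] )
            {
                printk(XENLOG_G_WARNING
                       "HVM%d.%u restore mismatch: xsave has non-zero data at %#x\n",
                       d->domain_id, vcpuid, i);
                /* Existing message follows, so a bad image gets both. */
                printk(XENLOG_G_WARNING
                       "HVM%d.%u restore mismatch: xsave length %u > %u\n",
                       d->domain_id, vcpuid, desc->length, size);
                return -EOPNOTSUPP;
            }
        }
    }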

> >> And finally iirc you also indicated you'd drop the full-size check
> >> (against HVM_CPU_XSAVE_SIZE(xfeature_mask)), which we
> >> identified to be wrong in case the origin machine had bigger
> >> xsave state than the receiving one.
> > 
> > I dropped MY full size check, the one based on the target machine's idea
> > of full size.
> > 
> > Do we want to drop the original check, too? I'm assuming you mean the one
> > that starts out:
> >     size = HVM_CPU_XSAVE_SIZE(xfeature_mask);
> >     if ( desc->length > size )
> > at (or near) 2015. If so, will drop the size assignment and if block.
> 
> Yes, that's the one (and there's no other use of
> HVM_CPU_XSAVE_SIZE(xfeature_mask) in that function, so I
> don't see what could have been ambiguous here). But of course
> remove it only if you _agree_ it is wrong.

I'm not sure it's right, but it doesn't fail in my tests (which, of course,
doesn't mean it's right). I don't think it does anything useful at this point;
so, I'm not against removing it and will do so.
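
For reference, the block I'd be dropping (reconstructed from the lines
quoted above; the body of the if is abbreviated):

    size = HVM_CPU_XSAVE_SIZE(xfeature_mask);
    if ( desc->length > size )
    {
        /* existing length-mismatch warning and error return */
    }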

The one anomaly I've wondered about is the point of
ctxt->xfeature_mask: we save it but never use it (outside of a print
statement) on restore. Do we want to compare it against the global
xfeature_mask for sanity (print a warning on mismatch), or just
ignore it? I wouldn't suggest removing it, as that would cause
another compatibility issue with migration.
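
If we did want the sanity check, I'd picture something minimal along
these lines (sketch only, not part of this patch; the exact wording
and the 64-bit format specifier are my guess):

    /* Warn, but don't fail, if the saved mask differs from the host's. */
    if ( ctxt->xfeature_mask != xfeature_mask )
        printk(XENLOG_G_WARNING
               "HVM%d.%u restore: saved xfeature mask %#"PRIx64
               " != host %#"PRIx64"\n",
               d->domain_id, vcpuid, ctxt->xfeature_mask, xfeature_mask);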

> Jan
> 

