
[Xen-devel] [PATCH v7] Sanity check xsave area when migrating or restoring from older Xen versions



Xen 4.3.0, 4.2.3 and older transferred a maximum-sized xsave area (as
if all the available XCR0 bits were set); the new version transfers
only as much as the actual XCR0 bits require. This may result in a
smaller area if the trailing sections were missing (e.g., the LWP area
from an AMD machine). If the size doesn't match the XCR0-derived size,
the size is checked against the maximum size and the part of the xsave
area between the actual and maximum used size is checked for zero
data. If either the maximum-size check fails or any part of the
overflow area contains non-zero data, we return an error.
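
For illustration, a minimal standalone sketch of that overflow check
(the names check_xsave_overflow, blob, actual_size and sent_size are
made up for this example; the actual logic lives in
hvm_load_cpu_xsave_states() in the patch below):

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative sketch only, not the patch's code.  Accept a record
 * whose transferred size exceeds the XCR0-derived size only if every
 * byte in the overflow region is zero.  Returns 0 on success, -1 if
 * stale non-zero data is found in the overflow.
 */
static int check_xsave_overflow(const uint8_t *blob, size_t actual_size,
                                size_t sent_size)
{
    size_t i;

    for ( i = actual_size; i < sent_size; i++ )
        if ( blob[i] )
            return -1; /* non-zero data in overflow: reject the record */

    return 0;          /* overflow area is all zero: safe to ignore */
}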

Signed-off-by: Don Koch <dkoch@xxxxxxxxxxx>
---
Changes in V7:
- Note more specific older versions
- Correct copy length

Changes in V6:
- Error message shuffle.

Changes in V5:
- Back to one message before non-zero check, one in case of non-zero data
- Change vcpuid format to %u
- Change offsets to %#x
- Added better comment regarding bug (using Andrew's suggestion)

Changes in V4:
- Removed check of size based on xfeature_mask.
- Unsign some ints.
- Change %d to %u for unsigned ints.
- Move printk to only print if non-zero data found.

Changes in V3:
- use h->data for zero check
- remove max size check (use size that was sent)
- fix error message (drop first byte value)
- fix "for" issues

Changes in V2:
- Add check for size.
- Add check for non-zero data in unused part of block.

 xen/arch/x86/hvm/hvm.c | 32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f0e1edc..8d27128 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1971,6 +1971,7 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
     struct vcpu *v;
     struct hvm_hw_cpu_xsave *ctxt;
     struct hvm_save_descriptor *desc;
+    unsigned int i, desc_start;
 
     /* Which vcpu is this? */
     vcpuid = hvm_load_instance(h);
@@ -2011,15 +2012,8 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
                         save_area) + XSTATE_AREA_MIN_SIZE);
         return -EINVAL;
     }
-    size = HVM_CPU_XSAVE_SIZE(xfeature_mask);
-    if ( desc->length > size )
-    {
-        printk(XENLOG_G_WARNING
-               "HVM%d.%d restore mismatch: xsave length %u > %u\n",
-               d->domain_id, vcpuid, desc->length, size);
-        return -EOPNOTSUPP;
-    }
     h->cur += sizeof (*desc);
+    desc_start = h->cur;
 
     ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
     h->cur += desc->length;
@@ -2038,10 +2032,24 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
     size = HVM_CPU_XSAVE_SIZE(ctxt->xcr0_accum);
     if ( desc->length > size )
     {
+        /*
+         * Xen 4.3.0, 4.2.3 and older used to send longer-than-needed
+         * xsave regions.  Permit loading the record if the extra data
+         * is all zero.
+         */
+        for ( i = size; i < desc->length; i++ )
+        {
+            if ( h->data[desc_start + i] )
+            {
+                printk(XENLOG_G_WARNING
+                       "HVM%d.%u restore mismatch: xsave length %#x > %#x (non-zero data at %#x)\n",
+                       d->domain_id, vcpuid, desc->length, size, i);
+                return -EOPNOTSUPP;
+            }
+        }
         printk(XENLOG_G_WARNING
-               "HVM%d.%d restore mismatch: xsave length %u > %u\n",
+               "HVM%d.%u restore mismatch: xsave length %#x > %#x\n",
                d->domain_id, vcpuid, desc->length, size);
-        return -EOPNOTSUPP;
     }
     /* Checking finished */
 
@@ -2049,8 +2057,8 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
     v->arch.xcr0_accum = ctxt->xcr0_accum;
     if ( ctxt->xcr0_accum & XSTATE_NONLAZY )
         v->arch.nonlazy_xstate_used = 1;
-    memcpy(v->arch.xsave_area, &ctxt->save_area,
-           desc->length - offsetof(struct hvm_hw_cpu_xsave, save_area));
+    memcpy(v->arch.xsave_area, &ctxt->save_area, min(desc->length, size)
+           - offsetof(struct hvm_hw_cpu_xsave, save_area));
 
     return 0;
 }
-- 
1.8.3.1

