
Re: [Xen-devel] [PATCH v2] Sanity check xsave area when migrating or restoring from older Xen versions



On 17/10/2014 18:11, Don Koch wrote:
Xen 4.3 and older transferred a maximum sized xsave area (as if all
the available XCR0 bits were set); the new version only transfers
based on the actual XCR0 bits. This may result in a smaller area if
the last sections were missing (e.g., the LWP area from an AMD
machine). If the size doesn't match the XCR0-derived size, the size is
checked against the maximum size, and the part of the xsave area
between the actual and maximum used size is checked for zero data. If
the max size check fails or any part of the overflow area is non-zero,
we return an error.

Signed-off-by: Don Koch <dkoch@xxxxxxxxxxx>
---
  xen/arch/x86/hvm/hvm.c | 29 ++++++++++++++++++++++++++++-
  1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f0e1edc..bdebc67 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1971,6 +1971,8 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
      struct vcpu *v;
      struct hvm_hw_cpu_xsave *ctxt;
      struct hvm_save_descriptor *desc;
+    u32 eax, ebx, ecx, edx;
+    uint8_t *overflow_start;

    /* Which vcpu is this? */
      vcpuid = hvm_load_instance(h);
@@ -2041,8 +2043,32 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
          printk(XENLOG_G_WARNING
                 "HVM%d.%d restore mismatch: xsave length %u > %u\n",
                 d->domain_id, vcpuid, desc->length, size);

This warning should be elided if we pass the zero-check.
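Roughly like this, perhaps (a sketch only, not tested, reusing the patch's ctxt/size/desc; xsave_tail_is_zero() is just a placeholder name for the zero-scan introduced further down, not an existing helper):

    if ( desc->length > size )
    {
        /*
         * Possibly a record padded out to maximum size by an older Xen.
         * Only warn (and fail) once the extra bytes are known to be
         * non-zero.
         */
        if ( !xsave_tail_is_zero(ctxt, size, desc->length) )
        {
            printk(XENLOG_G_WARNING
                   "HVM%d.%d restore mismatch: xsave length %u > %u\n",
                   d->domain_id, vcpuid, desc->length, size);
            return -EOPNOTSUPP;
        }
    }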

-        return -EOPNOTSUPP;
+
+        /* Check to see if the xsave_area is the maximum size.
+           If so, it is likely the save is from an older xen. */
+        cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);

This check is bogus for heterogeneous hardware. We have no way of calculating what the maximum xsave area size on the sending side should have been...

+        if ( desc->length !=
+             ecx + offsetof(struct hvm_hw_cpu_xsave, save_area) ) {
+            printk(XENLOG_G_WARNING
+                   "HVM%d.%d restore mismatch: xsave length %u != %u\n",
+                   d->domain_id, vcpuid, desc->length, ecx +
+                   (u32)offsetof(struct hvm_hw_cpu_xsave, save_area));
+            return -EOPNOTSUPP;
+        }
+
+        /* Make sure unused bytes are all zero. */

...which is fine, as we literally only care whether the overflow data is all zeros or not.

If it is all empty, we really don't care how large it was supposed to be before. If it is not empty, then something is certainly corrupt in this record.

+        overflow_start = (uint8_t *)&ctxt->save_area -
+                           offsetof(struct hvm_hw_cpu_xsave, save_area);

I am having a hard time working out whether this expression is correct. As ctxt is superimposed on top of h->data, would it not make sense to check h->data between the expected end and the real length? h->data is even the correct type for doing this with.
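E.g. (sketch only; since ctxt overlays h->data, scanning the record's own bytes covers exactly the range between the expected end and the real length, without the offsetof() arithmetic):

    const uint8_t *rec = (const uint8_t *)ctxt;
    unsigned int i;

    /*
     * Every byte beyond the expected size must be zero for the record
     * to be acceptable.
     */
    for ( i = size; i < desc->length; i++ )
        if ( rec[i] )
            return -EOPNOTSUPP;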

+        for (int i = size; i < desc->length; i++)

Style

+        {
+            if ( *(overflow_start + i) )
+            {
+                printk(XENLOG_G_WARNING
+                       "HVM%d.%d restore mismatch: xsave[%d] has non-zero data: %x\n",
+                       d->domain_id, vcpuid, i, *(overflow_start +i));

I don't think it is useful to print the value of the first non-zero byte. I would reduce it just to "junk found in overflow space".

~Andrew

+                return -EOPNOTSUPP;
+            }
+        }
      }
      /* Checking finished */

