Re: [Xen-devel] [PATCH 2/4] XSAVE/XRSTOR: enable guest save/restore



>>> On 31.08.10 at 04:59, "Han, Weidong" <weidong.han@xxxxxxxxx> wrote:
>--- a/xen/include/public/arch-x86/xen.h        Fri Aug 27 12:33:20 2010 -0400
>+++ b/xen/include/public/arch-x86/xen.h        Fri Aug 27 12:36:00 2010 -0400
>@@ -107,11 +107,17 @@ typedef uint64_t tsc_timestamp_t; /* RDT
> 
> /*
>  * The following is all CPU context. Note that the fpu_ctxt block is filled 
>- * in by FXSAVE if the CPU has feature FXSR; otherwise FSAVE is used.
>+ * in by XSAVE if the CPU has feature XSAVE, otherwise use FXSAVE if the CPU
>+ * has feature FXSR; otherwise FSAVE is used.
>  */
> struct vcpu_guest_context {
>-    /* FPU registers come first so they can be aligned for FXSAVE/FXRSTOR. */
>-    struct { char x[512]; } fpu_ctxt;       /* User-level FPU registers     */
>+    /*
>+     * FPU registers come first so they can be aligned for
>+     * FXSAVE/FXRSTOR and XSAVE/XRSTOR. Using 4096 Bytes for
>+     * future state extensions
>+     */
>+    struct { char x[4096]; } fpu_ctxt;
>+
> #define VGCF_I387_VALID                (1<<0)
> #define VGCF_IN_KERNEL                 (1<<2)
> #define _VGCF_i387_valid               0

As Keir already indicated, you can't change the size of this structure.
I'd say that it was a mistake to include the FPU state directly here in
the first place - you'll have to invent a mechanism that (compatibly)
removes this requirement. E.g. use the reserved part of fpu_ctxt to
store a guest handle referring to the actual save area: this ought to
work since (iirc) the structure is used as input only
(VCPUOP_initialize) outside of the tools, and the domctl interface is
allowed to change as long as you don't break compatibility with
stored data (saved guest images).
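
To make that a bit more concrete, here's a minimal sketch (field and
constant names are made up, and it assumes - iirc - that the last 48
bytes of the 512-byte FXSAVE image aren't written by the processor and
are hence available to software):

#include <stdint.h>

#define XSTATE_AREA_MAGIC 0x58535456u   /* hypothetical marker value */

struct fpu_ctxt_overlay {
    uint8_t  fxsave[464];     /* legacy FXSAVE image, unchanged          */
    uint32_t magic;           /* XSTATE_AREA_MAGIC => handle below valid */
    uint32_t xstate_size;     /* size of the out-of-line XSAVE buffer    */
    uint64_t xstate_gaddr;    /* guest handle/address of that buffer     */
    uint8_t  pad[512 - 464 - 16];
};

/* Must not grow beyond the existing 512-byte fpu_ctxt footprint. */
typedef char fpu_ctxt_overlay_size_check[
    sizeof(struct fpu_ctxt_overlay) == 512 ? 1 : -1];

Old callers that don't know about the marker keep writing a plain
FXSAVE image, while new ones set it and point at a separately
allocated XSAVE area, so neither the layout nor the size of
vcpu_guest_context changes.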

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel