
Fwd: [Xen-devel] [Patch 4/4] Refining Xsave/Xrestore support


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxxxx>
  • From: Haitao Shan <maillists.shan@xxxxxxxxx>
  • Date: Thu, 28 Oct 2010 10:32:39 +0800
  • Cc:
  • Delivery-date: Wed, 27 Oct 2010 19:34:23 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Sorry, I forgot to reply to all in my reply to Tim Deegan.

Shan Haitao


---------- Forwarded message ----------
From: Haitao Shan <maillists.shan@xxxxxxxxx>
Date: 2010/10/28
Subject: Re: [Xen-devel] [Patch 4/4] Refining Xsave/Xrestore support
To: Tim Deegan <Tim.Deegan@xxxxxxxxxx>


Hi, Deegan,

Thanks for your review. Please see my replies inline.

2010/10/27 Tim Deegan <Tim.Deegan@xxxxxxxxxx>:
> Hi,
>
> Thanks for this - good to see XSAVE save/restore working.  I've no
> comments on the tools part of this patch; it looks plausible but I
> haven't reviewed it closely.
>
> On the Xen HVM side:
>
>> diff -r 9bf6b4030d70 xen/arch/x86/hvm/hvm.c
>> --- a/xen/arch/x86/hvm/hvm.c  Wed Oct 27 21:55:45 2010 +0800
>> +++ b/xen/arch/x86/hvm/hvm.c  Wed Oct 27 22:17:24 2010 +0800
>> @@ -575,8 +575,13 @@ static int hvm_save_cpu_ctxt(struct doma
>>          vc = &v->arch.guest_context;
>>
>>          if ( v->fpu_initialised )
>> -            memcpy(ctxt.fpu_regs, &vc->fpu_ctxt, sizeof(ctxt.fpu_regs));
>> -        else
>> +            if ( cpu_has_xsave )
>> +                /* to restore guest img saved on xsave-incapable host */
>> +                memcpy(v->arch.xsave_area, ctxt.fpu_regs,
>> +                       sizeof(ctxt.fpu_regs));
>> +            else
>> +                memcpy(&vc->fpu_ctxt, ctxt.fpu_regs, sizeof(ctxt.fpu_regs));
>
> I think this hunk belongs in hvm_LOAD_cpu_ctxt()!
I originally did it that way. But doing this in hvm_load_cpu_ctxt
depends on two things:
1. hvm_load_cpu_ctxt must not run before the xsave restore routine;
otherwise, xsave_area contains no useful data at the time of the copy.
2. It seems to break restore when an HVM guest (one that never touches
the eXtended States at all) is saved on an Xsave-capable host and later
restored on an Xsave-incapable host.
So I moved this code into hvm_save_cpu_ctxt, so that the FPU context
inside guest save images is consistent.
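To spell out what I mean by keeping the in-image FPU context
consistent, here is a rough sketch of the save path (a sketch only,
using the names from the quoted hunk; the memcpy directions in the
final patch may differ):

    if ( v->fpu_initialised )
    {
        if ( cpu_has_xsave )
            /* The legacy FXSAVE image sits at the start of the xsave
             * area, so the saved image stays restorable on
             * Xsave-incapable hosts. */
            memcpy(ctxt.fpu_regs, v->arch.xsave_area,
                   sizeof(ctxt.fpu_regs));
        else
            memcpy(ctxt.fpu_regs, &vc->fpu_ctxt, sizeof(ctxt.fpu_regs));
    }
    else
        memset(ctxt.fpu_regs, 0, sizeof(ctxt.fpu_regs));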

>
>> +        else
>>              memset(ctxt.fpu_regs, 0, sizeof(ctxt.fpu_regs));
>>
>>          ctxt.rax = vc->user_regs.eax;
>
> [...]
>
>> +    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
>> +    h->cur += desc->length;
>> +
>> +    _xfeature_mask = ctxt->xfeature_mask;
>> +    if ( (_xfeature_mask & xfeature_mask) != xfeature_mask )
>> +        return -EINVAL;
>
> This allows XSAVE records to be loaded on machines with fewer features.
> Is that safe?
Oh, my mistake. I will fix that.
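Something along these lines, I think (a sketch only; the exact check
in the updated patch may differ):

    _xfeature_mask = ctxt->xfeature_mask;
    /* Reject records carrying feature state this host cannot restore. */
    if ( _xfeature_mask & ~xfeature_mask )
        return -EINVAL;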

>
>> +    v->arch.xcr0 = ctxt->xcr0;
>> +    v->arch.xcr0_accum = ctxt->xcr0_accum;
>> +    memcpy(v->arch.xsave_area, &ctxt->save_area, xsave_cntxt_size);
>> +
>> +    return 0;
>> +}
>
> Also, have you tested this on CPUs that don't support XSAVE?  The PV
> hypercall looks like it will return -EFAULT after trying to
> copy_from_user into a null pointer on the Xen side, but something more
> explicit would be better.
Sure. I will add that in my updated patch.
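For example (a sketch; the exact error code and placement are to be
decided in the updated patch):

    /* Fail early and explicitly on hosts without XSAVE, rather than
     * letting the copy into a NULL xsave_area fault later. */
    if ( !cpu_has_xsave )
        return -EINVAL;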

>
> Cheers,
>
> Tim.
>
> --
> Tim Deegan <Tim.Deegan@xxxxxxxxxx>
> Principal Software Engineer, XenServer Engineering
> Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

