
Re: [Xen-devel] [Patch 4/4] Refining Xsave/Xrestore support


  • To: Haitao Shan <maillists.shan@xxxxxxxxx>, Tim Deegan <Tim.Deegan@xxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Thu, 28 Oct 2010 14:05:49 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 28 Oct 2010 06:06:48 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Act2oNdZdmINAoo0xE6xHskqTUOkgg==
  • Thread-topic: [Xen-devel] [Patch 4/4] Refining Xsave/Xrestore support

At this point please go apply all requested changes and resubmit the patch
series in its entirety. I've flushed old versions from my queue.

 -- Keir

On 28/10/2010 12:28, "Haitao Shan" <maillists.shan@xxxxxxxxx> wrote:

> OK. I will update the patch according to the policy you described. Thanks!
> 
> Shan Haitao
> 
> 2010/10/28 Tim Deegan <Tim.Deegan@xxxxxxxxxx>:
>> Hi,
>> 
>> At 03:32 +0100 on 28 Oct (1288236759), Haitao Shan wrote:
>>>>> diff -r 9bf6b4030d70 xen/arch/x86/hvm/hvm.c
>>>>> --- a/xen/arch/x86/hvm/hvm.c  Wed Oct 27 21:55:45 2010 +0800
>>>>> +++ b/xen/arch/x86/hvm/hvm.c  Wed Oct 27 22:17:24 2010 +0800
>>>>> @@ -575,8 +575,13 @@ static int hvm_save_cpu_ctxt(struct doma
>>>>>          vc = &v->arch.guest_context;
>>>>> 
>>>>>          if ( v->fpu_initialised )
>>>>> -            memcpy(ctxt.fpu_regs, &vc->fpu_ctxt, sizeof(ctxt.fpu_regs));
>>>>> -        else
>>>>> +            if ( cpu_has_xsave )
>>>>> +                /* to restore guest img saved on xsave-incapable host */
>>>>> +                memcpy(v->arch.xsave_area, ctxt.fpu_regs,
>>>>> +                       sizeof(ctxt.fpu_regs));
>>>>> +            else
>>>>> +                memcpy(&vc->fpu_ctxt, ctxt.fpu_regs,
>>>>> +                       sizeof(ctxt.fpu_regs));
>>>> 
>>>> I think this hunk belongs in hvm_LOAD_cpu_ctxt()!
>>> I originally did the same as you suggest. But doing this in
>>> hvm_load_cpu_ctxt depends on two things:
>>> 1. hvm_load_cpu_ctxt must not be executed before the xsave restore
>>> routine; otherwise, xsave_area contains no useful data at the time
>>> of copying.
>> 
>> OK; then you should copy the other way in the xsave load routine as
>> well.  Xsave load will always happen after the CPU load since save
>> records are always written in increasing order of type.
>> 
>> That way, if the save file has no xsave record, the new domain's xsave
>> state is initialized from the fpu record, and if it does then the fpu
>> state is initialized from the xsave record.  I think that's the
>> behaviour you want.
>> 
>> In any case this is *definitely* wrong where it is because the memcpy
>> arguments are the wrong way round. :)
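
[For illustration only, a minimal sketch of the handling described above; it is
not part of the actual patch. It reuses the field names from the hunk quoted
earlier (v->arch.xsave_area, vc->fpu_ctxt, ctxt.fpu_regs), and the xsave load
routine is referred to only generically:

    /* In hvm_load_cpu_ctxt(): seed the guest's xsave area (or the legacy
     * FPU context on an xsave-incapable build) from the FPU record, so an
     * image saved on an xsave-incapable host still restores.  Note the
     * destination argument comes first. */
    if ( cpu_has_xsave )
        memcpy(v->arch.xsave_area, ctxt.fpu_regs, sizeof(ctxt.fpu_regs));
    else
        memcpy(&vc->fpu_ctxt, ctxt.fpu_regs, sizeof(ctxt.fpu_regs));

    /* In the xsave load routine, which always runs after the CPU record
     * because save records are written in increasing order of type: copy
     * the other way, so an image that does carry an xsave record overrides
     * the legacy FPU state seeded above. */
    memcpy(&vc->fpu_ctxt, v->arch.xsave_area, sizeof(vc->fpu_ctxt));
]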
>> 
>>> 2. It seems to break restore when an HVM guest (one that never touches
>>> eXtended States at all) saved on an Xsave-capable host is later
>>> restored on an Xsave-incapable host.
>> 
>> That's not a safe thing to do anyway -- once you've told the guest (via
>> CPUID) that XSAVE is available you can't migrate it to a host where it's
>> not supported.
>> 
>> Cheers,
>> 
>> Tim.
>> 
>> --
>> Tim Deegan <Tim.Deegan@xxxxxxxxxx>
>> Principal Software Engineer, XenServer Engineering
>> Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
>> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

