
Re: [Xen-devel] [PATCH 4/6] libxc: expose xsaves/xgetbv1/xsavec to hvm guest



OK, thanks Jan.
I will add the description in the next version.

Below is a short description.
For CPUID with EAX=0xD and ECX=1, EBX/ECX/EDX may be non-zero when XSAVES is
supported. Likewise, for sub-leaves with ECX >= 2, ECX/EDX may be non-zero.
If we want to expose XSAVES to HVM guests, we should not force these
registers to zero.
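For illustration only (this is not part of the patch, just a quick way to see
the values on a machine with XSAVES), something like the following could be
used to dump the sub-leaf 1 registers:

    #include <stdio.h>
    #include <cpuid.h>      /* GCC/clang helper for __get_cpuid_count() */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 0xD, sub-leaf 1 */
        if ( !__get_cpuid_count(0xd, 1, &eax, &ebx, &ecx, &edx) )
            return 1;

        /* EAX: bit0 XSAVEOPT, bit1 XSAVEC, bit2 XGETBV1, bit3 XSAVES */
        printf("eax = %#x\n", eax);
        /* EBX: size of the XSAVE area used by XSAVES for the currently
         * enabled states (non-zero once XSAVES is supported) */
        printf("ebx = %#x\n", ebx);
        /* ECX:EDX: bitmap of supervisor state components */
        printf("ecx = %#x, edx = %#x\n", ecx, edx);
        return 0;
    }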

So, in your opinion, is it proper to add this code here?
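To make the question concrete, a rough sketch of how the sub-leaf case could
read if the deletion stays and the old comment is amended (illustrative only,
not the final patch):

    case 2 ... 63: /* sub-leaves */
        if ( !(xfeature_mask & (1ULL << input[1])) )
        {
            regs[0] = regs[1] = regs[2] = regs[3] = 0;
            break;
        }
        /*
         * Don't touch EAX or EBX.  With XSAVES exposed, ECX and EDX
         * may be non-zero as well, so leave them alone too.
         */
        break;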

Thanks 

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@xxxxxxxx] 
Sent: Friday, July 17, 2015 3:48 PM
To: Ruan, Shuai
Cc: andrew.cooper3@xxxxxxxxxx; Ian.Campbell@xxxxxxxxxx; wei.liu2@xxxxxxxxxx; 
ian.jackson@xxxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx; Dong, Eddie; 
Nakajima, Jun; Tian, Kevin; xen-devel@xxxxxxxxxxxxx; keir@xxxxxxx
Subject: Re: [PATCH 4/6] libxc: expose xsaves/xgetbv1/xsavec to hvm guest

>>> On 17.07.15 at 09:26, <shuai.ruan@xxxxxxxxx> wrote:
> @@ -247,8 +250,7 @@ static void xc_cpuid_config_xsave(
>          regs[1] = 512 + 64; /* FP/SSE + XSAVE.HEADER */
>          break;
>      case 1: /* leaf 1 */
> -        regs[0] &= XSAVEOPT;
> -        regs[1] = regs[2] = regs[3] = 0;

This deletion as well as ...

> +        regs[0] &= (XSAVEOPT | XSAVEC | XGETBV1 | XSAVES);
>          break;
>      case 2 ... 63: /* sub-leaves */
>          if ( !(xfeature_mask & (1ULL << input[1])) )
> @@ -256,8 +258,6 @@ static void xc_cpuid_config_xsave(
>              regs[0] = regs[1] = regs[2] = regs[3] = 0;
>              break;
>          }
> -        /* Don't touch EAX, EBX. Also cleanup ECX and EDX */
> -        regs[2] = regs[3] = 0;

... this one needs explaining in the description. And in this latter case the 
comment should probably be retained/amended if the code deletion is really 
intended/warranted.

Jan

