
Re: [Xen-devel] [PATCH 1/4] x86/hvm: allow guest to use clflushopt and clwb



On 30/12/2015 02:16, Haozhong Zhang wrote:
On 12/30/15 09:35, Haozhong Zhang wrote:
On 12/29/15 15:46, Andrew Cooper wrote:
On 29/12/2015 11:31, Haozhong Zhang wrote:
Pass CPU features CLFLUSHOPT and CLWB into HVM domains so that these two
instructions can be used by guests.

The specification of the above two instructions can be found at
https://software.intel.com/sites/default/files/managed/0d/53/319433-022.pdf

Signed-off-by: Haozhong Zhang <haozhong.zhang@xxxxxxxxx>
Please be aware that my cpuid rework series completely changes all of this
code.  As this patch is small and self-contained, it would be best to get
it accepted early and for me to rebase over the result.

I'll split this patch series into two parts and put these two
instruction-enabling patches in the first part.

As part of my cpuid work, I had come to the conclusion that CLFLUSHOPT, CLWB
and PCOMMIT were all safe for all guests to use, as they are deemed safe for
cpl3 code to use.  Is there any reason why these wouldn't be safe for PV
guests to use?

Not for safety concerns.  These three instructions are usually used with
NVDIMM, which is only implemented for HVM domains in this patch series,
so I didn't enable them for PV.  I think they can be enabled for PV later
by another patch.
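
For illustration only (not from the patch or the thread): a minimal sketch
of how guest code might use CLWB to flush updates to an NVDIMM range.  It
assumes a binutils new enough to know the mnemonic, a 64-byte cache line
(real code should read CPUID.1.EBX[15:8] * 8 instead), and that CPUID has
already confirmed CLWB support; the names flush_to_pmem and CACHELINE_SIZE
are made up for the example.

#include <stddef.h>
#include <stdint.h>

#define CACHELINE_SIZE 64  /* assumed; query CPUID.1.EBX[15:8] in real code */

static inline void clwb(void *p)
{
    /* CLWB writes the line back to memory but may leave it in the cache. */
    asm volatile ("clwb %0" : "+m" (*(volatile char *)p));
}

static void flush_to_pmem(void *buf, size_t len)
{
    uintptr_t p = (uintptr_t)buf & ~(uintptr_t)(CACHELINE_SIZE - 1);
    uintptr_t end = (uintptr_t)buf + len;

    /* Flush every cache line covering [buf, buf + len). */
    for ( ; p < end; p += CACHELINE_SIZE )
        clwb((void *)p);

    /* Order the write-backs before any subsequent store or PCOMMIT. */
    asm volatile ("sfence" ::: "memory");
}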

---
  tools/libxc/xc_cpufeature.h      | 3 ++-
  tools/libxc/xc_cpuid_x86.c       | 4 +++-
  xen/arch/x86/hvm/hvm.c           | 7 +++++++
  xen/include/asm-x86/cpufeature.h | 5 +++++
  4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/xc_cpufeature.h b/tools/libxc/xc_cpufeature.h
index c3ddc80..5288ac6 100644
--- a/tools/libxc/xc_cpufeature.h
+++ b/tools/libxc/xc_cpufeature.h
@@ -140,6 +140,7 @@
  #define X86_FEATURE_RDSEED      18 /* RDSEED instruction */
  #define X86_FEATURE_ADX         19 /* ADCX, ADOX instructions */
  #define X86_FEATURE_SMAP        20 /* Supervisor Mode Access Protection */
-
+#define X86_FEATURE_CLFLUSHOPT  23 /* CLFLUSHOPT instruction */
+#define X86_FEATURE_CLWB        24 /* CLWB instruction */
  #endif /* __LIBXC_CPUFEATURE_H */
diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index 8882c01..fecfd6c 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -426,7 +426,9 @@ static void xc_cpuid_hvm_policy(xc_interface *xch,
                          bitmaskof(X86_FEATURE_RDSEED)  |
                          bitmaskof(X86_FEATURE_ADX)  |
                          bitmaskof(X86_FEATURE_SMAP) |
-                        bitmaskof(X86_FEATURE_FSGSBASE));
+                        bitmaskof(X86_FEATURE_FSGSBASE) |
+                        bitmaskof(X86_FEATURE_CLWB) |
+                        bitmaskof(X86_FEATURE_CLFLUSHOPT));
          } else
              regs[1] = 0;
          regs[0] = regs[2] = regs[3] = 0;
The entry for CLFLUSHOPT in the ISA Extension manual (August 2015) talks
about CPUID.7(ECX=1).EBX[8:15] indicating the cache line size affected by
the instruction. However, I can't find any other reference to this
information, nor an extension of the CPUID instruction in the ISA manual.
Should the Xen cpuid handling code be updated not to clobber this?

Yes, I missed this part and will update in the next version.

I double-checked the manual and it says that

  "The aligned cache line size affected is also indicated with the
   CPUID instruction (bits 8 through 15 of the EBX register when the
   initial value in the EAX register is 1)"

so I guess you really meant CPUID.1.EBX[8:15].  The 0x00000001 case
branch in xc_cpuid_hvm_policy() (and its callers) already passes the
host CPUID.1.EBX[8:15] to HVM domains, so no further action is needed
in this patch.
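
For reference only, not part of the thread: a minimal sketch showing how a
guest can read both pieces of CPUID information discussed above (the CLFLUSH
line size from leaf 1, and the CLFLUSHOPT/CLWB feature bits from leaf 7)
using the __cpuid and __cpuid_count macros from GCC's <cpuid.h>.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.1.EBX[15:8] is the CLFLUSH line size in 8-byte units. */
    __cpuid(1, eax, ebx, ecx, edx);
    printf("cache line size: %u bytes\n", ((ebx >> 8) & 0xff) * 8);

    /* CPUID.(EAX=7,ECX=0).EBX bit 23 = CLFLUSHOPT, bit 24 = CLWB. */
    __cpuid_count(7, 0, eax, ebx, ecx, edx);
    printf("clflushopt %u, clwb %u\n", (ebx >> 23) & 1, (ebx >> 24) & 1);

    return 0;
}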

Oops sorry.  Yes - I misread the paragraph in the manual.

Apologies for the noise.

~Andrew
