
[Xen-changelog] [xen master] x86/vpmu_intel: handle SMT consistently for programmable and fixed counters



commit 9f5390441a6e55990e6ae78e51fd800e55fb9637
Author:     Mohit Gambhir <mohit.gambhir@xxxxxxxxxx>
AuthorDate: Fri Apr 7 12:03:46 2017 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Fri Apr 7 12:03:46 2017 +0200

    x86/vpmu_intel: handle SMT consistently for programmable and fixed counters
    
    The patch introduces a macro FIXED_CTR_CTRL_ANYTHREAD_MASK and uses it
    to mask the .AnyThread bit for all counters in the IA32_FIXED_CTR_CTRL
    MSR in all versions of Intel Architectural Performance Monitoring.
    Masking the .AnyThread bit is necessary for two reasons:
    
    1. We need to be consistent in the implementation. We disable the
    .AnyThread bit in programmable counters (regardless of the version) by
    masking bit 21 in IA32_PERFEVTSELx. (See the code snippet below from
    vpmu_intel.c.)
    
     /* Masks used for testing whether an MSR is valid */
     #define ARCH_CTRL_MASK  (~((1ull << 32) - 1) | (1ull << 21))
    
    But we leave it enabled in fixed-function counters for version 3.
    Removing the version condition disables the bit in fixed-function
    counters regardless of the version, which is consistent with what is
    done for programmable counters.
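    As an illustrative aside (not part of the patch): the ARCH_CTRL_MASK
    expression quoted above rejects any IA32_PERFEVTSELx write that touches
    the reserved bits 63:32 or the .AnyThread bit 21. A minimal sketch, in
    which the helper name is hypothetical:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Same expression as the vpmu_intel.c snippet above: bits 63:32 are
     * reserved and bit 21 is .AnyThread, so a write setting any of them
     * is treated as invalid. */
    #define ARCH_CTRL_MASK  (~((1ull << 32) - 1) | (1ull << 21))

    /* Hypothetical helper, for illustration only. */
    static int perfevtsel_write_is_valid(uint64_t val)
    {
        return (val & ARCH_CTRL_MASK) == 0;
    }

    int main(void)
    {
        assert(perfevtsel_write_is_valid(0x004300c0));   /* ordinary event */
        assert(!perfevtsel_write_is_valid(1ull << 21));  /* .AnyThread set */
        assert(!perfevtsel_write_is_valid(1ull << 32));  /* reserved bit */
        return 0;
    }
    ```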
    
    2. We don't want to expose event counts from another guest (or the
    hypervisor), which can happen if the .AnyThread bit is not masked and a
    vCPU is only scheduled to run on one of the hardware threads of a
    hyper-threaded CPU.
    
    Also, note that the Intel SDM discourages the use of the .AnyThread bit
    in virtualized environments (per section 18.2.3.1, "AnyThread Counting
    and Software Evolution").
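    As a sketch of what the new loop in the patch computes (assuming
    fixed_pmc_cnt == 3, a common case): each fixed counter owns a 4-bit
    field in IA32_FIXED_CTR_CTRL, and bit 2 of each field is .AnyThread, so
    the per-counter masks OR together to the 0x444 that was previously
    hard-coded for version 2:

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define FIXED_CTR_CTRL_BITS            4
    #define FIXED_CTR_CTRL_ANYTHREAD_MASK  0x4

    int main(void)
    {
        unsigned int fixed_pmc_cnt = 3;   /* assumed count, for illustration */
        uint64_t fixed_ctrl_mask;
        unsigned int i;

        /* Reserved bits above the per-counter fields, as in core2_vpmu_init(). */
        fixed_ctrl_mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);

        /* Mask the .AnyThread bit (bit 2 of each 4-bit field) per counter. */
        for ( i = 0; i < fixed_pmc_cnt; i++ )
            fixed_ctrl_mask |= (uint64_t)FIXED_CTR_CTRL_ANYTHREAD_MASK
                               << (FIXED_CTR_CTRL_BITS * i);

        /* The AnyThread portion matches the 0x444 formerly used for v2. */
        assert((fixed_ctrl_mask & 0xfff) == 0x444);
        return 0;
    }
    ```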
    
    Signed-off-by: Mohit Gambhir <mohit.gambhir@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
---
 xen/arch/x86/cpu/vpmu_intel.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 0d66ecb..3f0322c 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -73,6 +73,7 @@ static bool_t __read_mostly full_width_write;
  */
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+#define FIXED_CTR_CTRL_ANYTHREAD_MASK 0x4
 
 #define ARCH_CNTR_ENABLED   (1ULL << 22)
 
@@ -946,6 +947,7 @@ int __init core2_vpmu_init(void)
 {
     u64 caps;
     unsigned int version = 0;
+    unsigned int i;
 
     if ( current_cpu_data.cpuid_level >= 0xa )
         version = MASK_EXTR(cpuid_eax(0xa), PMU_VERSION_MASK);
@@ -979,8 +981,11 @@ int __init core2_vpmu_init(void)
     full_width_write = (caps >> 13) & 1;
 
     fixed_ctrl_mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-    if ( version == 2 )
-        fixed_ctrl_mask |= 0x444;
+    /* mask .AnyThread bits for all fixed counters */
+    for( i = 0; i < fixed_pmc_cnt; i++ )
+       fixed_ctrl_mask |=
+           (FIXED_CTR_CTRL_ANYTHREAD_MASK << (FIXED_CTR_CTRL_BITS * i));
+
     fixed_counters_mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
     global_ctrl_mask = ~((((1ULL << fixed_pmc_cnt) - 1) << 32) |
                          ((1ULL << arch_pmc_cnt) - 1));
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
https://lists.xenproject.org/xen-changelog
