
Re: [Xen-devel] [PATCH v2 05/13] intel/VPMU: Clean up Intel VPMU code



On 09/25/2013 09:55 AM, Jan Beulich wrote:
On 20.09.13 at 11:42, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int i, idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( i = idx; i < msr_count - 1; i++ )
"idx" not being used further down anymore, why do you need
another loop variable here?

+    {
+        msr_area[i].index = msr_area[i + 1].index;
+        rdmsrl(msr_area[i].index, msr_area[i].data);
This is clearly a side effect that no-one would expect from the function call. Why do you do this?

I don't understand what you are trying to say here.

(And this is wrong; instead of the rdmsrl() it should be
        msr_area[i].data = msr_area[i + 1].data;
)


+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+
+    return;
Pointless "return".

All the same comments apply to vmx_rm_host_load_msr().
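
For reference, here is a minimal sketch of what the removal path might look like with the points above folded in (reusing idx for the shift loop, copying the saved data instead of re-reading it via rdmsrl(), and dropping the trailing return). This is only an illustration built from the quoted code, not the revised patch:

void vmx_rm_guest_msr(u32 msr)
{
    struct vcpu *curr = current;
    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;

    if ( msr_area == NULL )
        return;

    /* Find the slot holding the MSR to be removed. */
    for ( idx = 0; idx < msr_count; idx++ )
        if ( msr_area[idx].index == msr )
            break;

    if ( idx == msr_count )
        return;

    /* Shift the remaining entries down, copying index and saved data. */
    for ( ; idx < msr_count - 1; idx++ )
    {
        msr_area[idx].index = msr_area[idx + 1].index;
        msr_area[idx].data = msr_area[idx + 1].data;
    }
    msr_area[msr_count - 1].index = 0;

    curr->arch.hvm_vmx.msr_count = --msr_count;
    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
}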

+static int arch_pmc_cnt; /* Number of general-purpose performance counters */
+static int fixed_pmc_cnt; /* Number of fixed performance counters */
unsigned? __read_mostly?
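
Presumably what is being suggested is along these lines (a sketch of the annotated declarations only, not the final patch):

static unsigned int __read_mostly arch_pmc_cnt;  /* Number of general-purpose performance counters */
static unsigned int __read_mostly fixed_pmc_cnt; /* Number of fixed performance counters */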

@@ -248,13 +230,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
      int i;
/* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
      {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
Dropping the static array will make the handling here quite a bit more complicated should there ever appear a second, discontiguous MSR range.

The fixed counters range should always be contiguous per the Intel SDM.

@@ -262,32 +244,37 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
      }
/* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
          clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
As you can see, this is already the case here.

This is a different set of MSRs from what you've commented on above.
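
To illustrate the distinction, a condensed sketch based on the quoted hunks (read side only; the actual patch also clears the write half of the bitmap for the counters):

    /* Contiguous per-counter MSR ranges: address computed as base + i. */
    for ( i = 0; i < fixed_pmc_cnt; i++ )
        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
    for ( i = 0; i < arch_pmc_cnt; i++ )
        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);

    /* Individual control MSRs at unrelated addresses, cleared one by one. */
    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);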

-boris


The patch description doesn't really say _why_ you do this.
Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

