
Re: [Xen-devel] heavy P2M lock contention on guest HPET counter reads



>>> On 30.07.14 at 12:08, <JBeulich@xxxxxxxx> wrote:
> with 40+ vCPU Win2012R2 guests we're observing apparent guest
> livelocks. The 'd' debug key reveals more than half of the vCPUs doing
> an inlined HPET main counter read from KeQueryPerformanceCounter(),
> resulting in all of them racing for the lock at the beginning of
> __get_gfn_type_access(). Assuming it is really necessary to always
> take the write lock (rather than just the read one) here, would it
> perhaps be reasonable to introduce a bypass in
> hvm_hap_nested_page_fault() for the HPET page, similar to the LAPIC one?

Like this (against 4.4, if it matters, and with no tester feedback yet);
it also raises the question of whether we shouldn't simply do this for
all internal MMIO handlers.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1523,11 +1523,16 @@ int hvm_hap_nested_page_fault(paddr_t gp
         }
     }
 
-    /* For the benefit of 32-bit WinXP (& older Windows) on AMD CPUs,
-     * a fast path for LAPIC accesses, skipping the p2m lookup. */
+    /*
+     * For the benefit of 32-bit WinXP (& older Windows) on AMD CPUs,
+     * a fast path for LAPIC accesses, skipping the p2m lookup.
+     * Similarly for newer Windows (like Server 2012) a fast path for
+     * HPET accesses.
+     */
     if ( !nestedhvm_vcpu_in_guestmode(v)
          && is_hvm_vcpu(v)
-         && gfn == PFN_DOWN(vlapic_base_address(vcpu_vlapic(v))) )
+         && (gfn == PFN_DOWN(vlapic_base_address(vcpu_vlapic(v)))
+             || hpet_mmio_handler.check_handler(v, gpa)) )
     {
         if ( !handle_mmio() )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel