[Xen-devel] [PATCH v2 2/2] vmx/hap: optimize CR4 trapping
There are a bunch of bits in CR4 that should be allowed to be set
directly by the guest without requiring Xen intervention. Currently
this is already done by passing guest writes through into the CR4 used
when running in non-root mode, but an expensive vmexit is taken in
order to do so.

xenalyze reports the following when running a PV guest in shim mode:

 CR_ACCESS             3885950  6.41s 17.04%  3957 cyc { 2361| 3378| 7920}
   cr4                 3885940  6.41s 17.04%  3957 cyc { 2361| 3378| 7920}
   cr3                       1  0.00s  0.00%  3480 cyc { 3480| 3480| 3480}
     *[  0]                  1  0.00s  0.00%  3480 cyc { 3480| 3480| 3480}
   cr0                       7  0.00s  0.00%  7112 cyc { 3248| 5960|17480}
   clts                      2  0.00s  0.00%  4588 cyc { 3456| 5720| 5720}

After this change this turns into:

 CR_ACCESS                  12  0.00s  0.00%  9972 cyc { 3680|11024|24032}
   cr4                       2  0.00s  0.00% 17528 cyc {11024|24032|24032}
   cr3                       1  0.00s  0.00%  3680 cyc { 3680| 3680| 3680}
     *[  0]                  1  0.00s  0.00%  3680 cyc { 3680| 3680| 3680}
   cr0                       7  0.00s  0.00%  9209 cyc { 4184| 7848|17488}
   clts                      2  0.00s  0.00%  8232 cyc { 5352|11112|11112}

Note that this optimized trapping is currently only applied to guests
running with HAP on Intel hardware. If using shadow paging, more CR4
bits need to be trapped unconditionally, which makes this approach
unlikely to yield any important performance improvements.

Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>
Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
Cc: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
---
Changes since v1:
 - Use the mask_cr variable in order to cache the cr4 mask.
 - Take write_ctrlreg_mask into account when introspection is enabled.
---
 xen/arch/x86/hvm/vmx/vmx.c  | 39 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c |  2 ++
 xen/arch/x86/monitor.c      |  5 +++--
 3 files changed, 44 insertions(+), 2 deletions(-)
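For context, the optimization relies on the VMX CR4 guest/host mask:
every bit set in CR4_GUEST_HOST_MASK is owned by the hypervisor (guest
reads of that bit return the read shadow, and guest writes to it cause
a vmexit), while every clear bit is owned by the guest and accessed
without exiting. The standalone C sketch below (illustrative only, not
Xen code; the helper names are made up) models that bit split and the
cache refresh the patch adds to vmx_vmexit_handler:

#include <stdint.h>

/*
 * Illustrative model of VMX CR4 bit ownership (hypothetical helpers).
 * Bits set in "mask" are host-owned: the guest reads them from the
 * read shadow and writing them triggers a vmexit.  Bits clear in
 * "mask" are guest-owned and read/written in hardware CR4 directly.
 */
static uint64_t guest_visible_cr4(uint64_t hw_cr4, uint64_t read_shadow,
                                  uint64_t mask)
{
    /* Host-owned bits come from the shadow, the rest from hardware. */
    return (read_shadow & mask) | (hw_cr4 & ~mask);
}

/*
 * Mirror of the refresh done on vmexit: keep the cached host-owned
 * bits and pick up guest-owned bits from the CR4 value the guest may
 * have changed without trapping.
 */
static uint64_t refresh_cached_cr4(uint64_t cached_cr4, uint64_t hw_cr4,
                                   uint64_t mask)
{
    return (cached_cr4 & mask) | (hw_cr4 & ~mask);
}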
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d35cf55982..108f251bb9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1684,6 +1684,36 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr)
         }
 
         __vmwrite(GUEST_CR4, v->arch.hvm_vcpu.hw_cr[4]);
+
+        if ( !paging_mode_hap(v->domain) )
+            /*
+             * Shadow path has not been optimized because it requires
+             * unconditionally trapping more CR4 bits, at which point the
+             * performance benefit of doing this is quite dubious.
+             */
+            v->arch.hvm_vcpu.mask_cr[4] = ~0UL;
+        else
+        {
+            /*
+             * Update CR4 host mask to only trap when the guest tries to set
+             * bits that are controlled by the hypervisor.
+             */
+            v->arch.hvm_vcpu.mask_cr[4] = HVM_CR4_HOST_MASK | X86_CR4_PKE |
+                                          ~hvm_cr4_guest_valid_bits(v, 0);
+            v->arch.hvm_vcpu.mask_cr[4] |= v->arch.hvm_vmx.vmx_realmode ?
+                                           X86_CR4_VME : 0;
+            v->arch.hvm_vcpu.mask_cr[4] |= !hvm_paging_enabled(v) ?
+                                           (X86_CR4_PSE | X86_CR4_SMEP |
+                                            X86_CR4_SMAP)
+                                           : 0;
+            if ( v->domain->arch.monitor.write_ctrlreg_enabled &
+                 monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4) )
+                v->arch.hvm_vcpu.mask_cr[4] |=
+                    ~v->domain->arch.monitor.write_ctrlreg_mask[4];
+
+        }
+        __vmwrite(CR4_GUEST_HOST_MASK, v->arch.hvm_vcpu.mask_cr[4]);
+
         break;
 
     case 2:
@@ -3512,6 +3542,15 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 
     if ( paging_mode_hap(v->domain) )
     {
+        /*
+         * Xen allows the guest to modify some CR4 bits directly, update
+         * cached values to match.
+         */
+        __vmread(GUEST_CR4, &v->arch.hvm_vcpu.hw_cr[4]);
+        v->arch.hvm_vcpu.guest_cr[4] &= v->arch.hvm_vcpu.mask_cr[4];
+        v->arch.hvm_vcpu.guest_cr[4] |= v->arch.hvm_vcpu.hw_cr[4] &
+                                        ~v->arch.hvm_vcpu.mask_cr[4];
+
         __vmread(GUEST_CR3, &v->arch.hvm_vcpu.hw_cr[3]);
         if ( vmx_unrestricted_guest(v) || hvm_paging_enabled(v) )
             v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index dfe97b9705..54608e0011 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1100,6 +1100,8 @@ static void load_shadow_guest_state(struct vcpu *v)
     cr_read_shadow = (get_vvmcs(v, GUEST_CR4) & ~cr_gh_mask) |
                      (get_vvmcs(v, CR4_READ_SHADOW) & cr_gh_mask);
     __vmwrite(CR4_READ_SHADOW, cr_read_shadow);
+    /* Add the nested host mask to the one set by vmx_update_guest_cr. */
+    __vmwrite(CR4_GUEST_HOST_MASK, cr_gh_mask | v->arch.hvm_vcpu.mask_cr[4]);
 
     /* TODO: CR3 target control */
 }
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
index f229e69948..4317658c56 100644
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -189,10 +189,11 @@ int arch_monitor_domctl_event(struct domain *d,
             ad->monitor.write_ctrlreg_enabled &= ~ctrlreg_bitmask;
         }
 
-        if ( VM_EVENT_X86_CR3 == mop->u.mov_to_cr.index )
+        if ( VM_EVENT_X86_CR3 == mop->u.mov_to_cr.index ||
+             VM_EVENT_X86_CR4 == mop->u.mov_to_cr.index )
         {
             struct vcpu *v;
 
-            /* Latches new CR3 mask through CR0 code. */
+            /* Latches new CR3 or CR4 mask through CR0 code. */
             for_each_vcpu ( d, v )
                 hvm_update_guest_cr(v, 0);
         }
-- 
2.16.1