[Xen-devel] [PATCH RFC V3 3/5] xen: Force-enable relevant MSR events; optimize the number of sent MSR events
vmx_disable_intercept_for_msr() will now refuse to disable interception of
MSRs needed for memory introspection. It is not possible to gate this on
mem_access being active for the domain, since by the time mem_access becomes
active, interception for the interesting MSRs has already been disabled
(vmx_disable_intercept_for_msr() runs very early on).

Changes since V1:
 - Replaced printk() with gdprintk(XENLOG_DEBUG, ...).

Changes since V2:
 - Split a log line differently to keep it grepable.
 - Interception for relevant MSRs will be disabled only if mem_access is
   not enabled.
 - Since they end up being disabled early on (when mem_access is not
   enabled yet), re-enable interception when mem_access becomes active.

Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
---
 xen/arch/x86/hvm/vmx/vmcs.c |   24 ++++++++++++++++++++++++
 xen/arch/x86/mm/mem_event.c |   17 +++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 8ffc562..2de6f5a 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -39,6 +39,7 @@
 #include <xen/keyhandler.h>
 #include <asm/shadow.h>
 #include <asm/tboot.h>
+#include <asm/mem_event.h>
 
 static bool_t __read_mostly opt_vpid_enabled = 1;
 boolean_param("vpid", opt_vpid_enabled);
@@ -695,11 +696,34 @@ static void vmx_set_host_env(struct vcpu *v)
 void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type)
 {
     unsigned long *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
+    struct domain *d = v->domain;
 
     /* VMX MSR bitmap supported? */
     if ( msr_bitmap == NULL )
         return;
 
+    if ( mem_event_check_ring(&d->mem_event->access) )
+    {
+        /* Filter out MSRs needed for memory introspection. */
+        switch ( msr )
+        {
+        case MSR_IA32_SYSENTER_EIP:
+        case MSR_IA32_SYSENTER_ESP:
+        case MSR_IA32_SYSENTER_CS:
+        case MSR_IA32_MC0_CTL:
+        case MSR_STAR:
+        case MSR_LSTAR:
+
+            gdprintk(XENLOG_DEBUG, "MSR 0x%08x "
+                     "needed for memory introspection, still intercepted\n",
+                     msr);
+            return;
+
+        default:
+            break;
+        }
+    }
+
     /*
      * See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
      * have the write-low and read-high bitmap offsets the wrong way round.
diff --git a/xen/arch/x86/mm/mem_event.c b/xen/arch/x86/mm/mem_event.c
index 40ae841..050a1fa 100644
--- a/xen/arch/x86/mm/mem_event.c
+++ b/xen/arch/x86/mm/mem_event.c
@@ -30,6 +30,7 @@
 #include <asm/mem_access.h>
 #include <asm/mem_sharing.h>
 #include <xsm/xsm.h>
+#include <asm/hvm/vmx/vmcs.h>
 
 /* for public/io/ring.h macros */
 #define xen_mb()   mb()
@@ -600,6 +601,22 @@ int mem_event_domctl(struct domain *d, xen_domctl_mem_event_op_t *mec,
             rc = mem_event_enable(d, mec, med, _VPF_mem_access,
                                   HVM_PARAM_ACCESS_RING_PFN,
                                   mem_access_notification);
+            if ( rc == 0 )
+            {
+                struct vcpu *v;
+
+                /* Enable interception for MSRs needed for memory introspection. */
+                for_each_vcpu ( d, v )
+                {
+                    /* Safe, because of previous if ( !cpu_has_vmx ) check. */
+                    vmx_enable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP, MSR_TYPE_W);
+                    vmx_enable_intercept_for_msr(v, MSR_IA32_SYSENTER_ESP, MSR_TYPE_W);
+                    vmx_enable_intercept_for_msr(v, MSR_IA32_SYSENTER_CS, MSR_TYPE_W);
+                    vmx_enable_intercept_for_msr(v, MSR_IA32_MC0_CTL, MSR_TYPE_W);
+                    vmx_enable_intercept_for_msr(v, MSR_STAR, MSR_TYPE_W);
+                    vmx_enable_intercept_for_msr(v, MSR_LSTAR, MSR_TYPE_W);
+                }
+            }
         }
         break;
-- 
1.7.9.5