[Xen-changelog] [xen-4.0-testing] VT-d: No need to emulate WBINVD when force snooping feature available
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1278328298 -3600
# Node ID 1952867c5c92e7e4eef9ba4205995973de01aae0
# Parent  c3a9322b5dc3b62dc07882f271d2d79375287438
VT-d: No need to emulate WBINVD when force snooping feature available

There is no cache coherency issue if the VT-d engine's force snooping
feature is available.

Signed-off-by: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
xen-unstable changeset:   21715:70ac5171a48f
xen-unstable date:        Mon Jul 05 08:28:08 2010 +0100
---
 xen/arch/x86/hvm/vmx/vmcs.c |    4 +++-
 xen/arch/x86/hvm/vmx/vmx.c  |    3 +++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff -r c3a9322b5dc3 -r 1952867c5c92 xen/arch/x86/hvm/vmx/vmcs.c
--- a/xen/arch/x86/hvm/vmx/vmcs.c	Mon Jul 05 12:11:17 2010 +0100
+++ b/xen/arch/x86/hvm/vmx/vmcs.c	Mon Jul 05 12:11:38 2010 +0100
@@ -989,8 +989,10 @@ void vmx_do_resume(struct vcpu *v)
      * 1: flushing cache (wbinvd) when the guest is scheduled out if
      *    there is no wbinvd exit, or
      * 2: execute wbinvd on all dirty pCPUs when guest wbinvd exits.
+     * If VT-d engine can force snooping, we don't need to do these.
      */
-    if ( has_arch_pdevs(v->domain) && !cpu_has_wbinvd_exiting )
+    if ( has_arch_pdevs(v->domain) && !iommu_snoop
+         && !cpu_has_wbinvd_exiting )
     {
         int cpu = v->arch.hvm_vmx.active_cpu;
         if ( cpu != -1 )

diff -r c3a9322b5dc3 -r 1952867c5c92 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c	Mon Jul 05 12:11:17 2010 +0100
+++ b/xen/arch/x86/hvm/vmx/vmx.c	Mon Jul 05 12:11:38 2010 +0100
@@ -2101,6 +2101,9 @@ static void vmx_wbinvd_intercept(void)
     if ( !has_arch_mmios(current->domain) )
         return;
 
+    if ( iommu_snoop )
+        return;
+
     if ( cpu_has_wbinvd_exiting )
         on_each_cpu(wbinvd_ipi, NULL, 1);
     else
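
For readers skimming the changelog, the net effect of the patch can be condensed into the two predicates sketched below. This is only an illustrative sketch, not code from the Xen tree: the helpers flush_on_deschedule_needed() and emulate_guest_wbinvd() are hypothetical, and the booleans simply mirror the identifiers used in the patch (iommu_snoop, cpu_has_wbinvd_exiting, has_arch_pdevs, has_arch_mmios).

/*
 * Illustrative sketch only (hypothetical helpers, not Xen code):
 * how the two decision points behave once VT-d force snooping is
 * taken into account.
 */
#include <stdbool.h>

/* Assumed flags, mirroring the identifiers used in the patch. */
extern bool iommu_snoop;            /* VT-d engine can force snooping     */
extern bool cpu_has_wbinvd_exiting; /* CPU supports WBINVD VM exits       */

/*
 * vmx_do_resume() path: does a domain with assigned PCI devices still
 * need its cache flushed when a vCPU is descheduled?  Not if the IOMMU
 * forces snooping, and not if WBINVD exiting lets Xen flush only on an
 * actual guest WBINVD.
 */
static bool flush_on_deschedule_needed(bool has_arch_pdevs)
{
    return has_arch_pdevs && !iommu_snoop && !cpu_has_wbinvd_exiting;
}

/*
 * vmx_wbinvd_intercept() path: should a guest WBINVD be backed by a real
 * cache flush?  Only when the domain has MMIO ranges mapped and the IOMMU
 * cannot force snooping.
 */
static bool emulate_guest_wbinvd(bool has_arch_mmios)
{
    return has_arch_mmios && !iommu_snoop;
}

In both cases iommu_snoop short-circuits the expensive path: with force snooping enabled the IOMMU keeps DMA coherent with the CPU caches, so neither the deschedule-time flush nor the WBINVD emulation is needed.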