[Xen-changelog] [xen staging] x86/hap: improve hypervisor assisted guest TLB flush
commit c9495bd7dff587ce770b2318037d6a1d0511bd72
Author:     Roger Pau Monné <roger.pau@xxxxxxxxxx>
AuthorDate: Tue Mar 10 15:30:27 2020 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Mar 10 15:30:27 2020 +0100

    x86/hap: improve hypervisor assisted guest TLB flush

    The current implementation of the hypervisor assisted flush for HAP is
    extremely inefficient.

    First of all, there's no need to call paging_update_cr3, as the only
    relevant part of that function when doing a flush is the ASID vCPU
    flush, so just call that function directly.

    Since hvm_asid_flush_vcpu is protected against concurrent callers by
    using atomic operations, there's no longer any need to pause the
    affected vCPUs.

    Finally, the global TLB flush performed by flush_tlb_mask is also not
    necessary: since we only want to flush the guest TLB state, it's
    enough to trigger a vmexit on the pCPUs currently holding any vCPU
    state, as such a vmexit will already perform an ASID/VPID update and
    thus clear the guest TLB.

    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Wei Liu <wl@xxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
 xen/arch/x86/mm/hap/hap.c | 46 +++++++++++++++++++---------------------------
 1 file changed, 19 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 005942e6ff..a6d5e39b02 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -674,32 +674,24 @@ static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     hvm_update_guest_cr3(v, noflush);
 }
 
+/*
+ * Dummy function to use with on_selected_cpus in order to trigger a vmexit on
+ * selected pCPUs. When the VM resumes execution it will get a new ASID/VPID
+ * and thus a clean TLB.
+ */
+static void dummy_flush(void *data)
+{
+}
+
 static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                       void *ctxt)
 {
     static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
     cpumask_t *mask = &this_cpu(flush_cpumask);
     struct domain *d = current->domain;
+    unsigned int this_cpu = smp_processor_id();
     struct vcpu *v;
 
-    /* Avoid deadlock if more than one vcpu tries this at the same time. */
-    if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
-        return false;
-
-    /* Pause all other vcpus. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_pause_nosync(v);
-
-    /* Now that all VCPUs are signalled to deschedule, we wait... */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            while ( !vcpu_runnable(v) && v->is_running )
-                cpu_relax();
-
-    /* All other vcpus are paused, safe to unlock now. */
-    spin_unlock(&d->hypercall_deadlock_mutex);
-
     cpumask_clear(mask);
 
     /* Flush paging-mode soft state (e.g., va->gfn cache; PAE PDPE cache). */
@@ -710,20 +702,20 @@ static bool flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
         if ( !flush_vcpu(ctxt, v) )
             continue;
 
-        paging_update_cr3(v, false);
+        hvm_asid_flush_vcpu(v);
 
         cpu = read_atomic(&v->dirty_cpu);
-        if ( is_vcpu_dirty_cpu(cpu) )
+        if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) )
             __cpumask_set_cpu(cpu, mask);
     }
 
-    /* Flush TLBs on all CPUs with dirty vcpu state. */
-    flush_tlb_mask(mask);
-
-    /* Done. */
-    for_each_vcpu ( d, v )
-        if ( v != current && flush_vcpu(ctxt, v) )
-            vcpu_unpause(v);
+    /*
+     * Trigger a vmexit on all pCPUs with dirty vCPU state in order to force an
+     * ASID/VPID change and hence accomplish a guest TLB flush. Note that vCPUs
+     * not currently running will already be flushed when scheduled because of
+     * the ASID tickle done in the loop above.
+     */
+    on_selected_cpus(mask, dummy_flush, mask, 0);
 
     return true;
 }
--
generated by git-patchbot for /home/xen/git/xen.git#staging
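
For readers unfamiliar with the scheme above, the sketch below models it outside of Xen: the flush no longer pauses vCPUs or performs a global TLB flush; instead it atomically invalidates each targeted vCPU's ASID (a descheduled vCPU then picks up a fresh ASID when it is next scheduled) and collects the set of remote pCPUs currently holding vCPU state so they can be forced through a vmexit (on_selected_cpus() with a dummy handler in the patch). This is a hypothetical, stand-alone C11 model, not Xen code; every name in it (toy_vcpu, toy_asid_flush_vcpu, toy_flush_tlb, flush_all, CPU_INVALID) is invented for illustration.

/*
 * Hypothetical stand-alone model of the flush scheme in the patch above.
 * None of this is Xen code; all names are invented for illustration.
 * Build with: cc -std=c11 -Wall toy_flush.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_VCPUS    4
#define CPU_INVALID UINT32_MAX   /* stands in for "vCPU not running anywhere" */

struct toy_vcpu {
    _Atomic uint64_t asid_generation; /* bumping this invalidates the vCPU's ASID */
    _Atomic uint32_t dirty_cpu;       /* pCPU currently holding this vCPU's state */
};

static struct toy_vcpu vcpus[NR_VCPUS];

/* Model of hvm_asid_flush_vcpu(): atomic, hence safe against concurrent callers. */
static void toy_asid_flush_vcpu(struct toy_vcpu *v)
{
    atomic_fetch_add(&v->asid_generation, 1);
}

/* Model of the rewritten flush_tlb(): no vCPU pausing, no global TLB flush. */
static bool toy_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct toy_vcpu *v),
                          void *ctxt, uint32_t this_cpu)
{
    uint64_t kick_mask = 0; /* stands in for the per-CPU cpumask_t */

    for ( unsigned int i = 0; i < NR_VCPUS; i++ )
    {
        struct toy_vcpu *v = &vcpus[i];
        uint32_t cpu;

        if ( !flush_vcpu(ctxt, v) )
            continue;

        /* Tickle the ASID; a descheduled vCPU gets a new one when it runs. */
        toy_asid_flush_vcpu(v);

        /* Remember which remote pCPUs need an immediate kick (vmexit in Xen). */
        cpu = atomic_load(&v->dirty_cpu);
        if ( cpu != this_cpu && cpu != CPU_INVALID )
            kick_mask |= 1ull << cpu;
    }

    /* In Xen this is on_selected_cpus(mask, dummy_flush, mask, 0). */
    printf("would kick pCPU mask: %#llx\n", (unsigned long long)kick_mask);

    return true;
}

/* Flush-selection callback that targets every vCPU. */
static bool flush_all(void *ctxt, struct toy_vcpu *v)
{
    (void)ctxt;
    (void)v;
    return true;
}

int main(void)
{
    vcpus[0].dirty_cpu = 0;           /* running on the calling pCPU, skipped */
    vcpus[1].dirty_cpu = 2;
    vcpus[2].dirty_cpu = CPU_INVALID; /* descheduled: the ASID tickle suffices */
    vcpus[3].dirty_cpu = 3;

    toy_flush_tlb(flush_all, NULL, /* this_cpu */ 0);
    return 0;
}

The point the patch relies on is that the ASID invalidation is an atomic operation, so concurrent flush callers are safe without the old trylock/pause/unpause dance, and only pCPUs that are running a targeted vCPU at that moment need the explicit kick.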