[PATCH v3 02/18] xen/arm64: flushtlb: Implement the TLBI repeat workaround for TLB flush by VA
From: Julien Grall <jgrall@xxxxxxxxxx>

Looking at the Neoverse N1 errata document, it is not clear to me why
the TLBI repeat workaround is not applied for TLB flush by VA.

The TLB flush by VA helpers are used in flush_xen_tlb_range_va_local()
and flush_xen_tlb_range_va(). So if the range size is a fixed size
smaller than PAGE_SIZE, it would be possible for the compiler to remove
the loop and therefore reproduce the problematic sequence described in
erratum 1286807.

So the TLBI repeat workaround should also be applied to the TLB flush
by VA helpers.

Fixes: 22e323d115d8 ("xen/arm: Add workaround for Cortex-A76/Neoverse-N1 erratum #1286807")
Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>

---

This was spotted while looking at reducing the scope of the memory
barriers. I don't have any affected HW.

Changes in v3:
    - Patch added
---
 xen/arch/arm/include/asm/arm64/flushtlb.h | 31 +++++++++++++++++------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/include/asm/arm64/flushtlb.h b/xen/arch/arm/include/asm/arm64/flushtlb.h
index 39d429ace552..5b033c0cb980 100644
--- a/xen/arch/arm/include/asm/arm64/flushtlb.h
+++ b/xen/arch/arm/include/asm/arm64/flushtlb.h
@@ -44,6 +44,27 @@ static inline void name(void)                           \
     : : : "memory");                                                \
 }
 
+/*
+ * Flush TLB by VA. This will likely be used in a loop, so the caller
+ * is responsible for using the appropriate memory barriers before/after
+ * the sequence.
+ *
+ * See above about the ARM64_WORKAROUND_REPEAT_TLBI sequence.
+ */
+#define TLB_HELPER_VA(name, tlbop)                                  \
+static inline void name(vaddr_t va)                                 \
+{                                                                   \
+    asm volatile(                                                   \
+        "tlbi " # tlbop ", %0;"                                     \
+        ALTERNATIVE(                                                \
+            "nop; nop;",                                            \
+            "dsb  ish;"                                             \
+            "tlbi " # tlbop ", %0;",                                \
+            ARM64_WORKAROUND_REPEAT_TLBI,                           \
+            CONFIG_ARM64_WORKAROUND_REPEAT_TLBI)                    \
+        : : "r" (va >> PAGE_SHIFT) : "memory");                     \
+}
+
 /* Flush local TLBs, current VMID only. */
 TLB_HELPER(flush_guest_tlb_local, vmalls12e1, nsh);
 
@@ -60,16 +81,10 @@ TLB_HELPER(flush_all_guests_tlb, alle1is, ish);
 TLB_HELPER(flush_xen_tlb_local, alle2, nsh);
 
 /* Flush TLB of local processor for address va. */
-static inline void __flush_xen_tlb_one_local(vaddr_t va)
-{
-    asm volatile("tlbi vae2, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
+TLB_HELPER_VA(__flush_xen_tlb_one_local, vae2);
 
 /* Flush TLB of all processors in the inner-shareable domain for address va. */
-static inline void __flush_xen_tlb_one(vaddr_t va)
-{
-    asm volatile("tlbi vae2is, %0;" : : "r" (va>>PAGE_SHIFT) : "memory");
-}
+TLB_HELPER_VA(__flush_xen_tlb_one, vae2is);
 
 #endif /* __ASM_ARM_ARM64_FLUSHTLB_H__ */
 /*
-- 
2.38.1
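
For context only, and not part of the patch: a minimal, host-buildable sketch
of the concern in the commit message. The helper names and bodies below are
simplified stand-ins rather than the real Xen flushtlb code; the real
__flush_xen_tlb_one_local() issues a TLBI instruction, modelled here by a
counter so the example can run anywhere. It illustrates that with a
compile-time-constant size no larger than PAGE_SIZE, the range loop executes
exactly once, so an optimising compiler may drop the loop entirely and leave
a single, unrepeated TLBI by VA: the pattern erratum 1286807 is concerned
with.

/* Hypothetical model of the range-flush loop (barriers omitted). */
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

typedef unsigned long vaddr_t;

static unsigned int tlbi_count;

/* Stand-in for __flush_xen_tlb_one_local(): one TLBI by VA. */
static void flush_one_local(vaddr_t va)
{
    (void)va;
    tlbi_count++;
}

/* Simplified shape of flush_xen_tlb_range_va_local(). */
static void flush_range_va_local(vaddr_t va, unsigned long size)
{
    vaddr_t end = va + size;

    while ( va < end )
    {
        flush_one_local(va);
        va += PAGE_SIZE;
    }
}

int main(void)
{
    /*
     * A fixed size smaller than PAGE_SIZE: the loop body runs once, so
     * the compiler is free to elide the loop and emit a lone TLBI by VA,
     * hence applying the repeat workaround inside the per-VA helper.
     */
    flush_range_va_local(0x200000UL, 64);
    printf("TLBI-by-VA operations issued: %u\n", tlbi_count);

    return 0;
}

With the patch applied, each such lone TLBI is followed, on cores selected by
ARM64_WORKAROUND_REPEAT_TLBI, by the "dsb ish; tlbi" repeat sequence patched
in via ALTERNATIVE, so even the loop-free case respects the erratum
workaround.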