[PATCH v3 04/18] xen/arm: flushtlb: Reduce scope of barrier for the TLB range flush
From: Julien Grall <jgrall@xxxxxxxxxx>
At the moment, flush_xen_tlb_range_va{,_local}() are using a system-wide
memory barrier. This is quite expensive and unnecessary.

For the local version, a non-shareable barrier is sufficient.
For the SMP version, an inner-shareable barrier is sufficient.

Furthermore, the initial barrier only needs to be a store barrier.

For the full explanation of the sequence, see asm/arm{32,64}/flushtlb.h.
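As a rough illustration (a sketch only; the authoritative reasoning lives in
asm/arm{32,64}/flushtlb.h), the resulting sequence for the SMP variant is:

    dsb(ishst);               /* order the page-table updates before the TLBI */
    __flush_xen_tlb_one(va);  /* one TLBI per page in the range */
    dsb(ish);                 /* wait for the TLB invalidation to complete */
    isb();                    /* synchronise the instruction stream */

with nshst/nsh in place of ishst/ish for the local-only variant.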
Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
---
Changes in v3:
- Patch added
---
xen/arch/arm/include/asm/flushtlb.h | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/xen/arch/arm/include/asm/flushtlb.h b/xen/arch/arm/include/asm/flushtlb.h
index 125a141975e0..e45fb6d97b02 100644
--- a/xen/arch/arm/include/asm/flushtlb.h
+++ b/xen/arch/arm/include/asm/flushtlb.h
@@ -37,13 +37,14 @@ static inline void flush_xen_tlb_range_va_local(vaddr_t va,
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(nshst); /* Ensure prior page-tables updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one_local(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(nsh); /* Ensure the TLB invalidation has completed */
     isb();
 }
 
@@ -56,13 +57,14 @@ static inline void flush_xen_tlb_range_va(vaddr_t va,
 {
     vaddr_t end = va + size;
 
-    dsb(sy); /* Ensure preceding are visible */
+    /* See asm/arm{32,64}/flushtlb.h for the explanation of the sequence. */
+    dsb(ishst); /* Ensure prior page-tables updates have completed */
     while ( va < end )
     {
         __flush_xen_tlb_one(va);
         va += PAGE_SIZE;
     }
-    dsb(sy); /* Ensure completion of the TLB flush */
+    dsb(ish); /* Ensure the TLB invalidation has completed */
     isb();
 }
--
2.38.1