
[Xen-devel] [PATCH v4 3/7] xen/arm: introduce invalidate_dcache_va_range

Take care of handling non-cacheline aligned addresses and sizes: the partial cachelines at the start and end of the range are cleaned and invalidated, rather than just invalidated, so that unrelated data sharing those lines is not lost.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Reviewed-by: Julien Grall <julien.grall@xxxxxxxxxx>
Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

---

Changes in v4:
- rename invalidate_xen_dcache_va_range to invalidate_dcache_va_range.
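
Note for reviewers (not part of the commit): below is a minimal, standalone
sketch of how the non-cacheline-aligned head and tail of a range are split
off from the aligned middle, with printf tracing standing in for the actual
cache maintenance instructions. The 64-byte line size, the
invalidate_range_sketch() helper and the example address/size are assumptions
made up purely for illustration.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHELINE_BYTES 64   /* assumed line size for the example */

    static void invalidate_range_sketch(uintptr_t p, size_t size)
    {
        uintptr_t end = p + size;
        size_t off;

        off = p % CACHELINE_BYTES;
        if ( off )
        {
            /* Partial head line: align down and clean+invalidate it. */
            p -= off;
            printf("clean+invalidate line at %#lx (partial head)\n",
                   (unsigned long)p);
            p += CACHELINE_BYTES;
            size -= CACHELINE_BYTES - off;  /* bookkeeping, mirrors the patch */
        }
        off = end % CACHELINE_BYTES;
        if ( off )
        {
            /* Partial tail line: align down and clean+invalidate it. */
            end -= off;
            size -= off;
            printf("clean+invalidate line at %#lx (partial tail)\n",
                   (unsigned long)end);
        }

        /* Fully covered lines in the middle are simply invalidated. */
        for ( ; p < end; p += CACHELINE_BYTES )
            printf("invalidate line at %#lx\n", (unsigned long)p);
    }

    int main(void)
    {
        /* A deliberately misaligned address and size. */
        invalidate_range_sketch(0x1010, 200);
        return 0;
    }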
---
 xen/include/asm-arm/arm32/page.h |    3 +++
 xen/include/asm-arm/arm64/page.h |    3 +++
 xen/include/asm-arm/page.h       |   30 ++++++++++++++++++++++++++++++
 3 files changed, 36 insertions(+)

diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 20a6a7f..a07e217 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -19,6 +19,9 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
         : : "r" (pte.bits), "r" (p) : "memory");
 }
 
+/* Inline ASM to invalidate dcache on register R (may be an inline asm operand) */
+#define __invalidate_dcache_one(R) STORE_CP32(R, DCIMVAC)
+
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __clean_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 101d7a8..1fd416d 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -14,6 +14,9 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
         : : "r" (pte.bits), "r" (p) : "memory");
 }
 
+/* Inline ASM to invalidate dcache on register R (may be an inline asm operand) */
+#define __invalidate_dcache_one(R) "dc ivac, %" #R ";"
+
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __clean_dcache_one(R) "dc cvac, %" #R ";"
 
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 1327b00..d88a586 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -268,6 +268,36 @@ extern size_t cacheline_bytes;
 /* Functions for flushing medium-sized areas.
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes. */
+
+static inline void invalidate_dcache_va_range(const void *p, unsigned long size)
+{
+    size_t off;
+    const void *end = p + size;
+
+    dsb(sy);           /* So the CPU issues all writes to the range */
+
+    off = (unsigned long)p % cacheline_bytes;
+    if ( off )
+    {
+        p -= off;
+        asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
+        p += cacheline_bytes;
+        size -= cacheline_bytes - off;
+    }
+    off = (unsigned long)end % cacheline_bytes;
+    if ( off )
+    {
+        end -= off;
+        size -= off;
+        asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (end));
+    }
+
+    for ( ; p < end; p += cacheline_bytes )
+        asm volatile (__invalidate_dcache_one(0) : : "r" (p));
+
+    dsb(sy);           /* So we know the flushes happen before continuing */
+}
+
 static inline void clean_dcache_va_range(const void *p, unsigned long size)
 {
     const void *end;
-- 
1.7.10.4

