
[Xen-devel] [PATCH v3 6/6] xen/arm: disable CPUs with different cacheline sizes



Even the different CPUs of a big.LITTLE system are expected to have the
same cacheline size. Unless the minimum of all cacheline sizes is used
across all CPU cores, cache coherency protocols can go wrong. Instead of
computing that minimum, for now, simply disable any CPU whose cacheline
size differs from the boot CPU's.

This check is not covered by the hmp-unsafe option because, even with
the correct scheduling and vcpu pinning in place, the system breaks if
cacheline sizes differ across cores. We don't expect this to be a
problem on most big.LITTLE systems.

This patch replaces setup_cache with a static inline,
read_cacheline_size, while still setting cacheline_bytes at the
beginning of start_xen as before.

In start_secondary we check that the cacheline size matches the boot
CPU's; if it doesn't, we disable the CPU.

Suggested-by: Julien Grall <julien.grall@xxxxxxx>
Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>

---
Changes in v3:
- new patch

---
Interestingly, I couldn't find a better way in C89 to printk a size_t
than casting it to unsigned long.
---
 xen/arch/arm/setup.c       | 15 +--------------
 xen/arch/arm/smpboot.c     |  8 ++++++++
 xen/include/asm-arm/page.h | 12 ++++++++++++
 3 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 032a6a8..b5f4c3a 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -682,19 +682,6 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
 size_t __read_mostly cacheline_bytes;
 
-/* Very early check of the CPU cache properties */
-void __init setup_cache(void)
-{
-    uint32_t ccsid;
-
-    /* Read the cache size ID register for the level-0 data cache */
-    WRITE_SYSREG32(0, CSSELR_EL1);
-    ccsid = READ_SYSREG32(CCSIDR_EL1);
-
-    /* Low 3 bits are log2(cacheline size in words) - 2. */
-    cacheline_bytes = 1U << (4 + (ccsid & 0x7));
-}
-
 /* C entry point for boot CPU */
 void __init start_xen(unsigned long boot_phys_offset,
                       unsigned long fdt_paddr,
@@ -708,7 +695,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     struct domain *dom0;
     struct xen_arch_domainconfig config;
 
-    setup_cache();
+    cacheline_bytes = read_cacheline_size();
 
     percpu_init_areas();
     set_processor_id(0); /* needed early, for smp_processor_id() */
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 04efb33..153572e 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -323,6 +323,14 @@ void start_secondary(unsigned long boot_phys_offset,
         stop_cpu();
     }
 
+    if ( cacheline_bytes != read_cacheline_size() )
+    {
+        printk(XENLOG_ERR "CPU%u cacheline size (%lu) does not match the boot CPU (%lu)\n",
+               smp_processor_id(), (unsigned long) read_cacheline_size(),
+               (unsigned long) cacheline_bytes);
+        stop_cpu();
+    }
+
     mmu_init_secondary_cpu();
 
     gic_init_secondary_cpu();
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index d948250..9fbf232 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -138,6 +138,18 @@ extern size_t cacheline_bytes;
 
 #define copy_page(dp, sp) memcpy(dp, sp, PAGE_SIZE)
 
+static inline size_t read_cacheline_size(void)
+{
+    uint32_t ccsid;
+
+    /* Read the cache size ID register for the level-0 data cache */
+    WRITE_SYSREG32(0, CSSELR_EL1);
+    ccsid = READ_SYSREG32(CCSIDR_EL1);
+
+    /* Low 3 bits are log2(cacheline size in words) - 2. */
+    return (size_t) (1U << (4 + (ccsid & 0x7)));
+}
+
 /* Functions for flushing medium-sized areas.
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes. */
-- 
1.9.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

