[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH 22/36] xen/arch: init cache coloring conf for Xen



Hi,

On 04/03/2022 17:46, Marco Solieri wrote:
From: Luca Miccio <lucmiccio@xxxxxxxxx>

Add initialization for Xen coloring data. By default, use the lowest
color index available.

Benchmarking the VM interrupt response time provides an estimation of
LLC usage by Xen's most latency-critical runtime task.  Results on Arm
Cortex-A53 on Xilinx Zynq UltraScale+ XCZU9EG show that one color, which
reserves 64 KiB of L2, is enough to attain best responsiveness.

More colors are instead very likely to be needed on processors whose L1
cache is physically-indexed and physically-tagged, such as Cortex-A57.
In such cases, coloring applies to L1 also, and there typically are two
distinct L1-colors. Therefore, reserving only one color for Xen would
senselessly partition a cache memory that is already private, i.e.
underutilize it. The default number of Xen colors is thus set to one.

Signed-off-by: Luca Miccio <206497@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Marco Solieri <marco.solieri@xxxxxxxxxxxxxxx>
---
  xen/arch/arm/coloring.c | 31 ++++++++++++++++++++++++++++++-
  1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/coloring.c b/xen/arch/arm/coloring.c
index d1ac193a80..761414fcd7 100644
--- a/xen/arch/arm/coloring.c
+++ b/xen/arch/arm/coloring.c
@@ -30,10 +30,18 @@
  #include <asm/coloring.h>
  #include <asm/io.h>
+/* By default Xen uses the lowestmost color */
+#define XEN_COLOR_DEFAULT_MASK 0x0001

You are setting a uint32_t value. So it should be 0x00000001.

But I think it is a bit confusing to define a mask here. Instead, I would define the color ID and set the bit.
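Something along these lines (purely illustrative, the names are made up):

```c
#include <stdint.h>

/*
 * Illustrative sketch only: define the default color ID and derive the
 * mask from it, so the "lowest color" intent is explicit and the width
 * of the literal can't be wrong.
 */
#define XEN_DEFAULT_COLOR      0                           /* lowest color ID */
#define XEN_COLOR_DEFAULT_MASK (UINT32_C(1) << XEN_DEFAULT_COLOR)

uint32_t xen_default_mask(void)
{
    return XEN_COLOR_DEFAULT_MASK;
}
```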

+#define XEN_COLOR_DEFAULT_NUM 1
+/* Current maximum useful colors */
+#define MAX_XEN_COLOR   128
+
  /* Number of color(s) assigned to Xen */
  static uint32_t xen_col_num;
  /* Coloring configuration of Xen as bitmask */
  static uint32_t xen_col_mask[MAX_COLORS_CELLS];
+/* Xen colors IDs */
+static uint32_t xen_col_list[MAX_XEN_COLOR];

Why do we need to store the Xen colors in both a bitmask form and an array of color IDs?
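If you keep only the bitmask, the color IDs can be recovered by iterating the set bits directly, along the lines of (untested sketch, helper name made up):

```c
#include <stdint.h>

/*
 * Illustrative sketch: walk the set bits of a multi-word mask and return
 * the next color ID at or after 'start', so no separate xen_col_list[]
 * array is needed. Returns cells * 32 when no further bit is set.
 */
unsigned int next_color(const uint32_t *mask, unsigned int cells,
                        unsigned int start)
{
    unsigned int id;

    for ( id = start; id < cells * 32; id++ )
        if ( mask[id / 32] & (UINT32_C(1) << (id % 32)) )
            return id;

    return cells * 32;
}
```

In Xen itself, the existing bitmap helpers (find_first_bit()/find_next_bit()) should already provide this.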

/* Number of color(s) assigned to Dom0 */
  static uint32_t dom0_col_num;
@@ -216,7 +224,7 @@ uint32_t get_max_colors(void)
bool __init coloring_init(void)
  {
-    int i;
+    int i, rc;
printk(XENLOG_INFO "Initialize XEN coloring: \n");
      /*
@@ -266,6 +274,27 @@ bool __init coloring_init(void)
      printk(XENLOG_INFO "Color bits in address: 0x%"PRIx64"\n", addr_col_mask);
      printk(XENLOG_INFO "Max number of colors: %u\n", max_col_num);
+ if ( !xen_col_num )
+    {
+        xen_col_mask[0] = XEN_COLOR_DEFAULT_MASK;
+        xen_col_num = XEN_COLOR_DEFAULT_NUM;
+        printk(XENLOG_WARNING "Xen color configuration not found. Using default\n");
+    }
+
+    printk(XENLOG_INFO "Xen color configuration: 0x%"PRIx32"%"PRIx32"%"PRIx32"%"PRIx32"\n",
+            xen_col_mask[3], xen_col_mask[2], xen_col_mask[1], xen_col_mask[0]);

You are making the assumption that MAX_COLORS_CELLS is always 4. This may be more or less. So this should be reworked to avoid making any assumption on the size.

I expect switching to the generic bitmask will help here.
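For instance, formatting the mask cell by cell removes the hardcoded indices (illustrative sketch, not Xen code):

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch: render an arbitrary-length mask array into 'buf',
 * most significant cell first, instead of hardcoding four array indices.
 * 'buf' must hold at least cells * 8 + 1 bytes.
 */
void format_col_mask(char *buf, const uint32_t *mask, unsigned int cells)
{
    unsigned int i;

    for ( i = cells; i > 0; i-- )
        buf += sprintf(buf, "%08x", mask[i - 1]);
}
```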

+    rc = copy_mask_to_list(xen_col_mask, xen_col_list, xen_col_num);
+
+    if ( rc )
+        return false;
+
+    for ( i = 0; i < xen_col_num; i++ )
+        if ( xen_col_list[i] > (max_col_num - 1) )
+        {
+            printk(XENLOG_ERR "ERROR: max. color value not supported\n");
+            return false;
+        }
+
      return true;
  }

Cheers,

--
Julien Grall


