
Re: [PATCH v6 00/15] Arm cache coloring


  • To: Carlo Nonato <carlo.nonato@xxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Tue, 30 Jan 2024 10:13:32 +0100
  • Cc: <andrea.bastoni@xxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Tue, 30 Jan 2024 09:13:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Carlo,

On 29/01/2024 18:17, Carlo Nonato wrote:
> 
> 
> Shared caches in multi-core CPU architectures are a problem for the
> predictability of memory access latency. This jeopardizes the applicability
> of many Arm platforms in real-time critical and mixed-criticality
> scenarios. We introduce support for cache partitioning with page
> coloring, a transparent software technique that enables isolation
> between domains and Xen itself, and thus avoids cache interference.
> 
> When creating a domain, a simple syntax (e.g. `0-3` or `4-11`) allows
> the user to assign cache partition IDs, called colors, where assigning
> different colors guarantees that no mutual cache eviction will ever
> happen. This instructs the Xen memory allocator to provide the i-th
> color assignee only with pages that map to color i, i.e. that are
> indexed in the i-th cache partition.
> 
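As a rough illustration of what "pages that map to color i" means, the mapping can be sketched as below. This is a hedged standalone sketch, assuming the number of colors is the LLC way size divided by the page size and a power of two; `addr_to_color` and the other names are illustrative, not the series' actual identifiers.

```c
/*
 * Hedged sketch, not the series' actual code: shows how a physical
 * address can map to a cache color. Assumes nr_colors = LLC way size /
 * page size, and that nr_colors is a power of two.
 */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12  /* 4 KiB pages */

static unsigned int addr_to_color(uint64_t paddr, unsigned int nr_colors)
{
    /* The cache set-index bits just above the page offset select the color. */
    return (unsigned int)((paddr >> PAGE_SHIFT) & (nr_colors - 1));
}
```

With 16 colors, consecutive 4 KiB pages cycle through colors 0..15, so two domains given disjoint color sets can never evict each other's lines from the shared cache.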
> The proposed implementation supports the dom0less feature.
> The proposed implementation doesn't support the static-mem feature.
> The solution has been tested in several scenarios, including Xilinx Zynq
> MPSoCs.
> 
> Open points:
> - Michal found some problem here
> https://patchew.org/Xen/20230123154735.74832-1-carlo.nonato@xxxxxxxxxxxxxxx/20230123154735.74832-4-carlo.nonato@xxxxxxxxxxxxxxx/#a7a06a26-ae79-402c-96a4-a1ebfe8b5862@xxxxxxx
>   but I haven't fully understood it. In the meantime I want to advance with v6,
>   so I hope we can continue the discussion here.
The problem is that when LLC coloring is enabled, you use allocate_memory()
for hwdom, just like for any other domain, so it will get assigned a VA range
from a typical Xen guest memory map (i.e. GUEST_RAM{0,1}_{BASE,SIZE}).
This can result in memory conflicts, given that the HW resources are mapped
1:1 to it (MMIO, reserved memory regions).
Instead, for hwdom we should use the host memory layout to prevent these
conflicts. A good example is find_unallocated_memory().
You need to:
 - fetch available RAM,
 - remove reserved-memory regions,
 - report the remaining ranges (aligning the base and skipping banks that are
   not reasonably big).
This will give you a list of memory regions you can then pass to
allocate_bank_memory().
The problem, as always, is to determine the size of the first region so that
it is sufficiently large to keep kernel+dtb+initrd in relatively close
proximity.
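The steps above can be sketched roughly as follows. This is an illustrative, self-contained fragment under invented names (`struct region`, `build_unallocated`, `MIN_BANK_SIZE`), using plain arrays instead of Xen's membank/rangeset machinery; it is not the actual implementation.

```c
/*
 * Hedged sketch, NOT Xen code: build a list of usable memory regions by
 * taking host RAM banks, subtracting reserved-memory regions, and
 * dropping banks that are not reasonably big.
 */
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct region { uint64_t start, size; };

#define MIN_BANK_SIZE (64ULL << 20)   /* skip banks below 64 MiB (arbitrary) */

/* Carve one reserved region out of one RAM bank; append survivors to 'out'. */
static int subtract(struct region bank, struct region rsv,
                    struct region *out, int n)
{
    uint64_t b_end = bank.start + bank.size;
    uint64_t r_end = rsv.start + rsv.size;

    if (r_end <= bank.start || rsv.start >= b_end) {
        out[n++] = bank;                               /* no overlap */
        return n;
    }
    if (rsv.start > bank.start)                        /* piece below rsv */
        out[n++] = (struct region){ bank.start, rsv.start - bank.start };
    if (r_end < b_end)                                 /* piece above rsv */
        out[n++] = (struct region){ r_end, b_end - r_end };
    return n;
}

/* Candidate hwdom regions from host RAM banks and reserved regions. */
static int build_unallocated(const struct region *ram, int n_ram,
                             const struct region *rsv, int n_rsv,
                             struct region *out)
{
    struct region cur[16], next[16];
    int n_out = 0;

    for (int i = 0; i < n_ram; i++) {
        int n_cur = 1;
        cur[0] = ram[i];
        for (int j = 0; j < n_rsv; j++) {      /* subtract every reservation */
            int n_next = 0;
            for (int k = 0; k < n_cur; k++)
                n_next = subtract(cur[k], rsv[j], next, n_next);
            memcpy(cur, next, (size_t)n_next * sizeof(*cur));
            n_cur = n_next;
        }
        for (int k = 0; k < n_cur; k++)        /* keep reasonably big banks */
            if (cur[k].size >= MIN_BANK_SIZE)
                out[n_out++] = cur[k];
    }
    return n_out;
}
```

The real code would additionally align each base (e.g. to the coloring granularity) and would then have to pick a first region large enough for kernel+dtb+initrd, which is the open sizing problem mentioned above.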

~Michal




 

