
Re: [PATCH v5 09/13] xen: add cache coloring allocator for domains



On Wed, 10 Jan 2024, Jan Beulich wrote:
> On 10.01.2024 01:46, Stefano Stabellini wrote:
> > On Tue, 9 Jan 2024, Jan Beulich wrote:
> >> On 02.01.2024 10:51, Carlo Nonato wrote:
> >>> This commit adds a new memory page allocator that implements the cache
> >>> coloring mechanism. The allocation algorithm enforces equal frequency
> >>> distribution of cache partitions, following the coloring configuration
> >>> of a domain. This allows an even utilization of cache sets for every
> >>> domain.
> >>>
> >>> Pages are stored in a color-indexed array of lists. Those lists are filled
> >>> by a simple init function which computes the color of each page.
> >>> When a domain requests a page, the allocator extracts the page from
> >>> the list with the maximum number of free pages among those that the
> >>> domain can access, given its coloring configuration.
> >>>
> >>> The allocator can only handle requests of order-0 pages. This allows
> >>> for an easier implementation, and since cache coloring targets only
> >>> embedded systems, it's assumed not to be a major problem.
> >>
> >> I'm curious about the specific properties of embedded systems that makes
> >> the performance implications of deeper page walks less of an issue for
> >> them.
> > 
> > I think Carlo meant to say that embedded systems tend to have a smaller
> > amount of RAM (our boards today have 4-8GB of total memory). So higher
> > level allocations (2MB/1GB) might not be possible.
> > 
> > Also, domains that care about interrupt latency tend to be RTOSes (e.g.
> > Zephyr, FreeRTOS) and RTOSes are happy to run with less than 1MB of
> > total memory available. This is so true that I vaguely remember hitting
> > a bug in xl/libxl when I tried to create a domain with 128KB of memory. 
> > 
> > 
> >> Nothing is said about address-constrained allocations. Are such entirely
> >> of no interest to domains on Arm, not even to Dom0 (e.g. for filling
> >> Linux'es swiotlb)?
> > 
> > Cache coloring is useful if you can use an IOMMU with all the
> > dma-capable devices. If that is not the case, then not even Dom0 would
> > be able to boot with cache coloring enabled (because it wouldn't be 1:1
> > mapped).
> > 
> > On ARM we only support booting Dom0 1:1 mapped, or not-1:1-mapped but
> > relying on the IOMMU.
> 
> So another constraint to be enforced both at the Kconfig level and at
> runtime?

Yeah, potentially.
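[Editorial note: at the Kconfig level, such a constraint could look roughly
like the fragment below. The option and dependency names here are
hypothetical, chosen only to illustrate the shape of the dependency, not
taken from the patch series.]

```kconfig
# Illustrative only: coloring is tied to having IOMMU/passthrough
# support available, since a colored Dom0 cannot be 1:1 mapped and
# must rely on the IOMMU for all DMA-capable devices.
config CACHE_COLORING
	bool "Last-level cache coloring support"
	depends on ARM_64 && HAS_PASSTHROUGH
	help
	  Enable cache coloring of domain memory. Requires an IOMMU
	  covering all DMA-capable devices; the runtime check for this
	  would still be needed in addition to this build-time gate.
```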


> That said, Linux'es swiotlb allocation can't know whether an
> IOMMU is in use by Xen.

Well, not exactly, but we have XENFEAT_direct_mapped and
XENFEAT_not_direct_mapped; that is how the kernel normally knows how to
behave.
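[Editorial note: a simplified sketch of how a kernel could use those two
feature flags to decide whether DMA bouncing is needed, modelled loosely on
the decision Linux makes on Arm (see xen_swiotlb_detect()); the function and
variable names below are illustrative, not copied from Linux.]

```c
#include <stdbool.h>

/* Stand-ins for the real Xen feature flags and query mechanism. */
enum { XENFEAT_direct_mapped, XENFEAT_not_direct_mapped, NR_FEATS };

static bool features[NR_FEATS];

static bool xen_feature(int f)
{
    return features[f];
}

/*
 * A domain that Xen reports as direct mapped (guest physical ==
 * machine addresses) must bounce DMA through swiotlb for devices not
 * behind an IOMMU; a not-direct-mapped domain relies on the IOMMU
 * instead and needs no bouncing.
 */
static bool need_swiotlb_bounce(void)
{
    if ( xen_feature(XENFEAT_direct_mapped) )
        return true;
    if ( xen_feature(XENFEAT_not_direct_mapped) )
        return false;
    /* Older Xen exposing neither flag: conservatively assume 1:1. */
    return true;
}
```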


> If something like that was done in a Dom0, the
> respective allocations still wouldn't really work correctly (and the
> kernel may or may not choke on this).