
Re: [PATCH 6/8] pdx: introduce a new compression algorithm based on offsets between regions



On Wed, Jun 18, 2025 at 03:02:34PM +0200, Jan Beulich wrote:
> On 11.06.2025 19:16, Roger Pau Monne wrote:
> > With the appearance of Intel Sierra Forest and Granite Rapids it's now
> > possible to get a production x86 host with the following memory map:
> > 
> > SRAT: Node 0 PXM 0 [0000000000000000, 000000007fffffff]
> > SRAT: Node 0 PXM 0 [0000000100000000, 000000407fffffff]
> > SRAT: Node 1 PXM 1 [0000061e80000000, 0000065e7fffffff]
> > SRAT: Node 2 PXM 2 [00000c3e80000000, 00000c7e7fffffff]
> > SRAT: Node 3 PXM 3 [0000125e80000000, 0000129e7fffffff]
> 
> Looks like these aren't the numbers that the test harness uses. The main
> properties (relevant for the algorithms) look to be the same, though.

Yeah, it seems like the report we got has two different memory setups,
one with 1TB and one with 2TB of total RAM.  The layout (offsets),
however, is the same.

> > ---
> > We can discuss whether we want both the fast and the slow variants.  The
> > slow (brute force) was added as a result of me playing with weird region
> > layouts where the fast one didn't manage to compress, or the resulting
> > coefficients had a poor compression ratio.  However at this point the
> > slow variant has only proven helpful in synthetic cases, I haven't (yet?)
> > seen a real host memory layout that would benefit from it.
> 
> I'm not convinced we need the slow variant right away. What exactly (if
> anything) is going to be wanted/needed on future hardware we'll only really
> know when such arrives. However, see also my comment on xen/pdx.h.

I've implemented the lookup table as you suggested, and I think that's
more efficient than the current div plus mod.  With the lookup table
implementation there are no longer separate fast and slow variants.

> > @@ -297,7 +309,223 @@ void __init pfn_pdx_compression_reset(void)
> >      pfn_pdx_hole_shift = 0;
> >  }
> >  
> > -#endif /* CONFIG_PDX_COMPRESSION */
> > +#elif defined(CONFIG_PDX_OFFSET_COMPRESSION) /* CONFIG_PDX_MASK_COMPRESSION */
> > +
> > +unsigned long __ro_after_init pdx_offset = ~0UL;
> > +unsigned long __ro_after_init pdx_size = ~0UL;
> > +
> > +static unsigned long __initdata top_pfn;
> > +
> > +bool pdx_is_region_compressible(paddr_t base, unsigned long npages)
> > +{
> > +    return !pdx_size ? true
> > +                     : (PFN_DOWN(base) % pdx_offset) + npages <= pdx_size;
> > +}
> > +
> > +STATIC bool __init pfn_offset_calculate_fast(unsigned long base)
> > +{
> > +    unsigned long size = max((1UL << MAX_ORDER), base);
> 
> Since pfn_pdx_compression_setup(), where "base" originally comes from, has no
> caller (yet), it's hard to follow what "base" is and why it would affect 
> "size".

pfn_pdx_compression_setup() has existing callers from patch 4: "pdx:
provide a unified set of unit functions".

> > +    unsigned long offset = ~0UL;
> > +    unsigned int i;
> > +
> > +    if ( nr <= 1 )
> > +        return false;
> > +
> > +    /* Calculate minimal offset between regions. */
> > +    for ( i = 1; i < nr; i++ )
> > +        offset = min(offset, ranges[i].base - ranges[i - 1].base);
> > +
> > +    /* Return early if offset is smaller than the minimum size. */
> > +    if ( size >= offset )
> > +        return false;
> 
> Comment and code are off by one with one another. I actually wonder whether,
> for the scheme to be beneficial, there shouldn't be some threshold below which
> we would also go without doing any compression.

Yeah, I've wondered the same, but I didn't know where to put the
threshold.
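
FWIW, for the off-by-one itself: bailing on size == offset looks right
to me, since in that case:

    index = (pfn % offset) + (pfn / offset) * offset = pfn

and the translation degenerates to the identity, so I think it's the
comment that wants adjusting to "not bigger than".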

> > --- a/xen/include/xen/pdx.h
> > +++ b/xen/include/xen/pdx.h
> > @@ -65,6 +65,31 @@
> >   * This scheme also holds for multiple regions, where HHHHHHH acts as
> >   * the region identifier and LLLLLL fully contains the span of every
> >   * region involved.
> > + *
> > + * ## PDX offset compression
> > + *
> > + * Alternative compression mechanism that relies on RAM ranges having a similar
> > + * size and offset between them:
> > + *
> > + * ┌────────┬──────────┬────────┬──────────┐   ┌────────┬──────────┐
> > + * │ RAM 0  │          │ RAM 1  │          │...│ RAM N  │          │
> > + * ├────────┼──────────┼────────┴──────────┘   └────────┴──────────┘
> > + * │<------>│          │
> > + * │  size             │
> > + * │<----------------->│
> > + *         offset
> > + *
> > + * The compression removes the holes between RAM regions:
> > + *
> > + * ┌────────┬────────┐   ┌────────┐
> > + * │ RAM 0  │ RAM 1  │...│ RAM N  │
> > + * ├────────┼────────┘   └────────┘
> > + * │<------>│
> > + *    size
> > + *
> > + * The compressed index is calculated as:
> > + *
> > + * index = (pfn % offset) + ((pfn / offset) * size)
> >   */
> 
> Would be nice imo if the presentation here wouldn't give the impression that
> the offsets are all identical, and hence the compressed map ends up being
> entirely without holes.

OK, I've made this too nice I guess.
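
To make the formula concrete with made-up numbers: with offset =
0x100000 pages and size = 0x40000 pages, pfn 0x212345 translates to:

    index = (0x212345 % 0x100000) + (0x212345 / 0x100000) * 0x40000
          = 0x12345 + 2 * 0x40000
          = 0x92345

i.e. the low bits within the region are kept, and the region number
selects a compacted 0x40000-page slot.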

> In fact I can't help the impression that with enough
> nodes (but otherwise the same properties, i.e. only node 0 being special) at
> least the "fast" calculation will fail. Which in turn would be an argument
> to keep the "slow" one.

It depends.  The fast calculation assumes the minimal offset and then
just adjusts the size.  If the offset is not the same between ranges
this leads to the size being expanded to ensure the selected offset
works with all ranges.  With enough nodes, spread out enough, it's
possible for the algorithm to converge on offset == size, at which
point there's no compression at all, as illustrated below.
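
As a made-up illustration: ranges with PFN bases 0x0, 0x1000 and 0x2800
give a minimal offset of 0x1000, but the third range starts at
0x2800 % 0x1000 = 0x800 into its window, so size must be expanded to at
least 0x800 plus the range size.  With enough ranges at irregular bases
size ends up equal to offset, and the index calculation becomes the
identity.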

> In fact, the alternative approach we discussed - avoiding division, but
> using a table of offsets - would seem to avoid such a weakness, because the
> offsets can then vary (to some degree; it still needs to be possible to
> easily determine the table indexes).

Yeah, that's what I'm preparing to send.

Thanks, Roger.



 

