Re: [PATCH v1 3/5] xen/riscv: introduce setup_mm()
On Thu, 2024-10-17 at 17:15 +0200, Jan Beulich wrote:
> On 16.10.2024 11:15, Oleksii Kurochko wrote:
>
> > +    if ( map_pages_to_xen((vaddr_t)mfn_to_virt(0),
> > +                          _mfn(0), nr_mfns,
> > +                          PAGE_HYPERVISOR_RW) )
> > +        panic("Unable to setup the directmap mappings.\n");
> > +}
> > +
> > +/* Map a frame table to cover physical addresses ps through pe */
> > +static void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
> > +{
> > +    unsigned long nr_mfns = mfn_x(mfn_add(maddr_to_mfn(pe), -1)) -
>
> This looks to be accounting for a partial page at the end.
>
> > +                            mfn_x(maddr_to_mfn(ps)) + 1;
>
> Whereas this doesn't do the same at the start. The sole present caller
> passes 0, so that's going to be fine for the time being. Yet it's a
> latent pitfall. I'd recommend to either drop the function parameter,
> or to deal with it correctly right away.
Then it would be better to do it the following way, to be sure that
everything is properly aligned:
    unsigned long nr_mfns = PFN_DOWN(ALIGN_UP(pe) - ALIGN_DOWN(ps));
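In other words, assuming Xen's PFN_UP()/PFN_DOWN() helpers, a minimal
sketch of the same computation (rounding pe up and ps down to page
boundaries) could be:

/* Count every frame touched by [ps, pe), including partial pages
 * at either end. */
unsigned long nr_mfns = PFN_UP(pe) - PFN_DOWN(ps);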
> > +    memset(&frame_table[0], 0, nr_mfns * sizeof(struct page_info));
> > +    memset(&frame_table[nr_mfns], -1,
> > +           frametable_size - (nr_mfns * sizeof(struct page_info)));
> > +}
> > +
> > +vaddr_t directmap_virt_end __read_mostly;
>
> __ro_after_init? And moved ahead of the identifier, just like ...
>
> > +/*
> > + * Setup memory management
> > + *
> > + * RISC-V 64 has a large virtual address space (the minimum supported
> > + * MMU mode is Sv39, which provides TBs of VA space).
> > + * In the case of RISC-V 64, the directmap and frametable are mapped
> > + * starting from physical address 0 to simplify the page_to_mfn(),
> > + * mfn_to_page(), and maddr_to_virt() calculations, as there is no need
> > + * to account for {directmap, frametable}_base_pdx in this setup.
> > + */
> > +void __init setup_mm(void)
>
> ... __init is placed e.g. here.
>
> It's also unclear why the variable needs to be non-static.
As it could then be used in some ASSERT(), for example in
virt_to_page(), as Arm or x86 do.
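For illustration, a rough sketch of what such a check could look like,
modeled on Arm's virt_to_page() (the body below is illustrative only,
relying on the directmap starting at physical address 0):

static inline struct page_info *virt_to_page(const void *v)
{
    unsigned long va = (vaddr_t)v;

    /* Catch pointers outside the direct map early. */
    ASSERT(va >= DIRECTMAP_VIRT_START);
    ASSERT(va < directmap_virt_end);

    return frame_table + PFN_DOWN(va - DIRECTMAP_VIRT_START);
}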
>
> > +{
> > +    const struct membanks *banks = bootinfo_get_mem();
> > +    paddr_t ram_end = 0;
> > +    paddr_t ram_size = 0;
> > +    unsigned int i;
> > +
> > +    /*
> > +     * We need some memory to allocate the page-tables used for the
> > +     * directmap mappings. But some regions may contain memory already
> > +     * allocated for other uses (e.g. modules, reserved-memory...).
> > +     *
> > +     * For simplicity, add all the free regions in the boot allocator.
> > +     */
> > +    populate_boot_allocator();
> > +
> > +    total_pages = 0;
> > +
> > +    for ( i = 0; i < banks->nr_banks; i++ )
> > +    {
> > +        const struct membank *bank = &banks->bank[i];
> > +        paddr_t bank_end = bank->start + bank->size;
> > +
> > +        ram_size = ram_size + bank->size;
> > +        ram_end = max(ram_end, bank_end);
> > +    }
> > +
> > +    setup_directmap_mappings(PFN_DOWN(ram_end));
>
> While you may want to use non-offset-ed mappings, I can't see how you
> can validly map just a single huge chunk, no matter whether there are
> holes in there. Such holes could hold MMIO regions, which I'm sure
> would require more careful mapping (to avoid cacheable accesses, or
> even speculative ones).
My intention was to avoid the subtraction of directmap_start ( =
ram_start ) in maddr_to_virt(), to have fewer operations during the
maddr-to-virt calculation:

static inline void *maddr_to_virt(paddr_t ma)
{
    /* Offset in the direct map, accounting for pdx compression */
    unsigned long va_offset = maddr_to_directmapoff(ma);

    ASSERT(va_offset < DIRECTMAP_SIZE);

    return (void *)(DIRECTMAP_VIRT_START /* - directmap_start */ +
                    va_offset);
}
But it seems I don't have any chance to avoid that, because of the
reasons you mentioned... Except perhaps to have a config option which
hard-codes RAM_START for each platform ( which, on the other hand, would
make Xen less flexible in some respects ).
Does it make sense to have a config option instead of identifying
ram_start at runtime?
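For comparison, a minimal sketch of the offset-ed variant I wanted to
avoid, assuming a hypothetical directmap_start variable initialized to
ram_start (the names here are assumptions, not the final code):

static inline void *maddr_to_virt(paddr_t ma)
{
    /* Offset in the direct map, accounting for pdx compression;
     * the extra subtraction of directmap_start is the cost of a
     * non-zero RAM base. */
    unsigned long va_offset = maddr_to_directmapoff(ma - directmap_start);

    ASSERT(va_offset < DIRECTMAP_SIZE);

    return (void *)(DIRECTMAP_VIRT_START + va_offset);
}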
So I'll have to rework this part of the code to look like this:

for ( i = 0; i < banks->nr_banks; i++ )
{
    const struct membank *bank = &banks->bank[i];
    paddr_t bank_end = bank->start + bank->size;

    ram_size = ram_size + bank->size;
    ram_start = min(ram_start, bank->start);
    ram_end = max(ram_end, bank_end);

    setup_directmap_mappings(PFN_DOWN(bank->start),
                             PFN_DOWN(bank->size));
}
>
> > +    total_pages = PFN_DOWN(ram_size);
>
> Imo the rounding down to page granularity needs to be done when
> ram_size is accumulated, such that partial pages simply won't be
> counted. Unless of course there's a guarantee that banks can never
> have partial pages at their start/end.
Hmm, good point. I thought that start/end are always properly aligned,
but I can't find anything in the device tree spec guaranteeing that for
memory nodes, so it will indeed be better to round down while ram_size
is accumulated.
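Something like the following per-bank accumulation, counting only the
complete pages each bank contains (my reading of the suggestion, not
the final code):

for ( i = 0; i < banks->nr_banks; i++ )
{
    const struct membank *bank = &banks->bank[i];
    paddr_t bank_end = bank->start + bank->size;

    /* Only count pages fully contained within the bank. */
    total_pages += PFN_DOWN(bank_end) - PFN_UP(bank->start);
}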
~ Oleksii