
Re: [PATCH v3 3/3] xen/riscv: introduce identity mapping



On Thu, 2023-07-20 at 12:29 +0200, Jan Beulich wrote:
> On 20.07.2023 10:28, Oleksii wrote:
> > On Thu, 2023-07-20 at 07:58 +0200, Jan Beulich wrote:
> > > On 19.07.2023 18:35, Oleksii wrote:
> > > > On Tue, 2023-07-18 at 17:03 +0200, Jan Beulich wrote:
> > > > > > +            unsigned long load_end = LINK_TO_LOAD(_end);
> > > > > > +            unsigned long pt_level_size = XEN_PT_LEVEL_SIZE(i - 1);
> > > > > > +            unsigned long xen_size = ROUNDUP(load_end - load_start, pt_level_size);
> > > > > > +            unsigned long page_entries_num = xen_size / pt_level_size;
> > > > > > +
> > > > > > +            while ( page_entries_num-- )
> > > > > > +                pgtbl[index++].pte = 0;
> > > > > > +
> > > > > > +            break;
> > > > > 
> > > > > Unless there's a "not crossing a 2Mb boundary" guarantee
> > > > > somewhere
> > > > > that I've missed, this "break" is still too early afaict.
> > > > If I add a '2Mb boundary check' for load_start and
> > > > linker_start, could it be an upstreamable solution?
> > > > 
> > > > Something like:
> > > >     if ( !IS_ALIGNED(load_start, MB(2)) )
> > > >         printk("load_start should be 2Mb aligned\n");
> > > > and
> > > >     ASSERT( IS_ALIGNED(XEN_VIRT_START, MB(2)) )
> > > > in xen.lds.S.
> > > 
> > > Arranging for the linked address to be 2Mb-aligned is certainly
> > > reasonable. Whether expecting the load address to also be depends
> > > on whether that can be arranged for (which in turn depends on
> > > boot loader behavior); it cannot be left to "luck".
> > Maybe I didn't quite understand you here, but if Xen has an
> > alignment check of the load address, then the boot loader has to
> > follow Xen's alignment requirements. So it doesn't look like
> > 'luck'.
> 
> That depends on (a) the alignment being properly expressed in the
> final binary and (b) the boot loader honoring it. (b) is what you
> double-check above, emitting a printk(), but I'm not sure about (a)
> being sufficiently enforced with just the ASSERT in the linker
> script. Maybe I'm wrong, though.
It should be enough for the current purpose, but perhaps I am missing
something.
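
To make it concrete, here is a sketch of both checks I have in mind
(assuming IS_ALIGNED() and MB() are still usable after the
preprocessing of xen.lds.S, which should hold since they expand to
plain arithmetic). In xen.lds.S:

    ASSERT(IS_ALIGNED(XEN_VIRT_START, MB(2)), "XEN_VIRT_START should be 2Mb aligned")

and at boot, before the page tables are set up (panic() rather than
printk(), since booting from a misaligned load address can't work
anyway):

    if ( !IS_ALIGNED(load_start, MB(2)) )
        panic("load_start should be 2Mb aligned\n");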

> 
> > > > Then we will have completely different L0 tables for the
> > > > identity and non-identity mappings, and the code above will
> > > > be correct.
> > > 
> > > As long as Xen won't grow beyond 2Mb total. Considering that at
> > > some point you may want to use large page mappings for .text,
> > > .data, and .rodata, that alone would grow Xen to 6Mb (or really
> > > 8, assuming .init goes separate as well). That's leaving aside
> > > the realistic option of the mere sum of all sections being larger
> > > than 2Mb. That said, even Arm64 with ACPI is still quite a bit
> > > below 2Mb. x86 is nearing 2.5 though in even a somewhat limited
> > > config; allyesconfig may well be beyond that already.
> > I am missing something about Xen's size. Let's assume that Xen is
> > mapped using only 4k pages (as is done now). Then, if Xen grows
> > beyond 2Mb, the only thing that changes is the number of page
> > tables, so it is only a question of changing PGTBL_INITIAL_COUNT
> > (in the case of RISC-V).
> 
> And the way you do the tearing down of the transient 1:1 mapping.
It looks like removing the 1:1 mapping will be the same.

Let's assume that the size of Xen is 4Mb, that the load and linker
ranges don't overlap (the load and linker start addresses are both
2Mb-aligned), and that the difference between them isn't bigger than
1Gb. Then one L2 page table, one L1 page table, two L0 page tables for
the identity mapping, and two L0 page tables for the non-identity
mapping are needed. At L1 we will have different indices for
load_start and linker_start, so all that is needed is to clear two L1
page table entries starting from some index.
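
To illustrate, a sketch of the teardown I have in mind (pt_index() and
l1_pgtbl are hypothetical names for "index of a VA at a given level"
and "the L1 table covering the 1:1 range"; LINK_TO_LOAD() and
XEN_PT_LEVEL_SIZE() are as in the patch):

    unsigned long load_end = LINK_TO_LOAD(_end);
    /* Number of L1 (2Mb) entries covering Xen: 2 for a 4Mb image. */
    unsigned long nr = ROUNDUP(load_end - load_start,
                               XEN_PT_LEVEL_SIZE(1)) / XEN_PT_LEVEL_SIZE(1);
    unsigned long index = pt_index(1, load_start);

    /* Clear only the L1 entries of the 1:1 (load) range; the linker
       range lives at different L1 indices, so it stays mapped. */
    while ( nr-- )
        l1_pgtbl[index++].pte = 0;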

The only issue I see now is that it won't work if the identity mapping
crosses a 1Gb boundary. Then two L1 page tables will be needed for the
identity mapping, and the identity mapping will be removed from only
one of them.

Am I missing anything else?
Wouldn't it be better to take that into account now?
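
For completeness, the kind of guard I mean (a sketch; the exact macro
spellings may differ, but XEN_PT_LEVEL_SHIFT(2) is meant to be the
1Gb/L2 shift for Sv39):

    /* Refuse to boot if the 1:1 mapping would need two L1 tables,
       i.e. if it crosses a 1Gb boundary. */
    if ( (load_start >> XEN_PT_LEVEL_SHIFT(2)) !=
         ((load_end - 1) >> XEN_PT_LEVEL_SHIFT(2)) )
        panic("1:1 mapping crosses a 1Gb boundary\n");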

> 
> > Could you please explain why Xen will grow to 6/8 MB in case of
> > larger
> > page mappings? In case of larger page mapping fewer tables are
> > needed.
> > For example, if we would like to use 2Mb pages then we will stop at
> > L1
> > page table and write an physical address to L1 page table entry
> > instead
> > of creating new L0 page table.
> 
> When you use 2Mb mappings, then you will want to use separate ones
> for .text, .rodata, and .data (plus perhaps .init), to express the
> differing permissions correctly. Consequently you'll need more
> virtual address space, but - yes - fewer page table pages. And of
> course the 1:1 unmapping logic will again be slightly different.
Thanks for the clarification.

~ Oleksii