
Re: [PATCH v3 3/3] xen/riscv: introduce identity mapping



On Tue, 2023-07-18 at 17:03 +0200, Jan Beulich wrote:
> On 17.07.2023 16:40, Oleksii Kurochko wrote:
> > The way the switch to virtual addresses was implemented in
> > commit e66003e7be ("xen/riscv: introduce setup_initial_pages")
> > isn't safe enough because:
> > * enable_mmu() depends on hooking all exceptions
> >   and the page fault.
> > * Any exception other than a page fault, or failing to take the
> >   expected page fault, causes it to malfunction, meaning boot will
> >   fail depending on where Xen was loaded into memory.
> > 
> > Instead of the previously proposed way of switching to virtual
> > addresses, it was decided to use an identity mapping of the entire
> > Xen image and to remove that mapping from the page tables after the
> > switch to virtual addresses.
> > Since it is not easy to keep track of where the identity map was
> > mapped, we look for the top-most entry exclusive to the identity
> > map and remove it.
> 
> Doesn't this paragraph need adjustment now?
It should, yes. Thanks. I'll update it in the next patch version.

> 
> > --- a/xen/arch/riscv/mm.c
> > +++ b/xen/arch/riscv/mm.c
> > @@ -25,6 +25,12 @@ unsigned long __ro_after_init phys_offset;
> >  #define LOAD_TO_LINK(addr) ((unsigned long)(addr) - phys_offset)
> >  #define LINK_TO_LOAD(addr) ((unsigned long)(addr) + phys_offset)
> >  
> > +/*
> > + * Should be removed as soon as enough headers are merged to allow
> > + * inclusion of <xen/lib.h>.
> > + */
> > +#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
> 
> Like said to Shawn for PPC in [1], there's now a pretty easy way to
> get this macro available for use here without needing to include
> xen/lib.h.
> 
> [1]
> https://lists.xen.org/archives/html/xen-devel/2023-07/msg01081.html
Great. That will be very useful, so I'll add a dependency on the patch
where ARRAY_SIZE and ROUNDUP are moved to <xen/macros.h>.

> 
> > @@ -35,8 +41,10 @@ unsigned long __ro_after_init phys_offset;
> >   *
> >   * One more page table may be needed in case the Xen load address
> >   * isn't 2 MB aligned.
> >   *
> >   * CONFIG_PAGING_LEVELS page tables are needed for the identity
> >   * mapping.
> >   */
> > -#define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS - 1) + 1)
> > +#define PGTBL_INITIAL_COUNT (CONFIG_PAGING_LEVELS * 2 + 1)
> 
> Where did the "- 1" go?
My fault. It should be:
  #define PGTBL_INITIAL_COUNT ((CONFIG_PAGING_LEVELS * 2 - 1) + 1)

> 
> > @@ -255,25 +266,40 @@ void __init noreturn noinline enable_mmu()
> >      csr_write(CSR_SATP,
> >                PFN_DOWN((unsigned long)stage1_pgtbl_root) |
> >                RV_STAGE1_MODE << SATP_MODE_SHIFT);
> > +}
> >  
> > -    asm volatile ( ".p2align 2" );
> > - mmu_is_enabled:
> > -    /*
> > -     * Stack should be re-inited as:
> > -     * 1. Right now an address of the stack is relative to load time
> > -     *    addresses what will cause an issue in case of load start
> > -     *    address isn't equal to linker start address.
> > -     * 2. Addresses in stack are all load time relative which can be
> > -     *    an issue in case when load start address isn't equal to
> > -     *    linker start address.
> > -     *
> > -     * We can't return to the caller because the stack was reseted
> > -     * and it may have stash some variable on the stack.
> > -     * Jump to a brand new function as the stack was reseted
> > -     */
> > +void __init remove_identity_mapping(void)
> > +{
> > +    unsigned int i;
> > +    pte_t *pgtbl;
> > +    unsigned int index, xen_index;
> > +    unsigned long load_start = LINK_TO_LOAD(_start);
> > +
> > +    for ( pgtbl = stage1_pgtbl_root, i = CONFIG_PAGING_LEVELS; i; i-- )
> > +    {
> > +        index = pt_index(i - 1, load_start);
> > +        xen_index = pt_index(i - 1, XEN_VIRT_START);
> > +
> > +        if ( index != xen_index )
> > +        {
> > +            /* remove after it will be possible to include <xen/lib.h> */
> > +            #define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))
> 
> ROUNDUP() is even part of the patch that I've submitted already.
> 
> > +            unsigned long load_end = LINK_TO_LOAD(_end);
> > +            unsigned long pt_level_size = XEN_PT_LEVEL_SIZE(i - 1);
> > +            unsigned long xen_size = ROUNDUP(load_end - load_start, pt_level_size);
> > +            unsigned long page_entries_num = xen_size / pt_level_size;
> > +
> > +            while ( page_entries_num-- )
> > +                pgtbl[index++].pte = 0;
> > +
> > +            break;
> 
> Unless there's a "not crossing a 2Mb boundary" guarantee somewhere
> that I've missed, this "break" is still too early afaict.
You are right. I'll have to rewrite this part.
Thanks.

~ Oleksii



 

