Re: [PATCH v4 2/7] xen/riscv: set up fixmap mappings
On Tue, 2024-08-13 at 10:22 +0200, Jan Beulich wrote:
> On 09.08.2024 18:19, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/config.h
> > +++ b/xen/arch/riscv/include/asm/config.h
> > @@ -74,6 +74,14 @@
> >  #error "unsupported RV_STAGE1_MODE"
> >  #endif
> >
> > +#define XEN_VIRT_SIZE MB(2)
> > +
> > +#define BOOT_FDT_VIRT_START (XEN_VIRT_START + XEN_VIRT_SIZE)
> > +#define BOOT_FDT_VIRT_SIZE MB(4)
> > +
> > +#define FIXMAP_BASE (BOOT_FDT_VIRT_START + BOOT_FDT_VIRT_SIZE)
> > +#define FIXMAP_ADDR(n) (FIXMAP_BASE + (n) * PAGE_SIZE)
>
> Just to confirm: Did you consider leaving gaps between the regions, but
> then discarded that idea for whatever reason? It's not like you're tight
> on address space.

No, I would like to leave gaps between the regions. There is a gap
between slots where it is possible; inside a slot I decided not to add
such a gap, without any specific reason TBH. I can add some gap (for
example, 0x100000) if that would be better and more consistent.

> As to FIXMAP_ADDR() - wouldn't that be a better fit in fixmap.h?

Sure, that would be a more correct place for this macro.

> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/fixmap.h
> > @@ -0,0 +1,44 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * fixmap.h: compile-time virtual memory allocation
> > + */
> > +#ifndef ASM_FIXMAP_H
> > +#define ASM_FIXMAP_H
> > +
> > +#include <xen/bug.h>
> > +#include <xen/page-size.h>
> > +#include <xen/pmap.h>
> > +
> > +#include <asm/page.h>
> > +
> > +/* Fixmap slots */
> > +#define FIX_PMAP_BEGIN (0) /* Start of PMAP */
> > +#define FIX_PMAP_END (FIX_PMAP_BEGIN + NUM_FIX_PMAP - 1) /* End of PMAP */
> > +#define FIX_MISC (FIX_PMAP_END + 1) /* Ephemeral mappings of hardware */
> > +
> > +#define FIX_LAST (FIX_MISC + 1) /* +1 means a guard slot */
>
> As per my comment on the earlier version: No, I don't think this arranges
> for a guard slot. It merely arranges for FIX_MISC to not trigger the
> BUG_ON() in virt_to_fix().
>
> > --- a/xen/arch/riscv/include/asm/page.h
> > +++ b/xen/arch/riscv/include/asm/page.h
> > @@ -81,6 +81,12 @@ static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache)
> >      BUG_ON("unimplemented");
> >  }
> >
> > +/* Write a pagetable entry. */
> > +static inline void write_pte(pte_t *p, pte_t pte)
> > +{
> > +    *p = pte;
> > +}
>
> No use of write_atomic() here? And no read_pte() counterpart right away (then
> also properly using read_atomic())?

I wanted to avoid the fence before "*p = pte" which would be present in
the case of write_atomic(). Wouldn't it be enough to use WRITE_ONCE()
here?
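[Editorial aside, not part of the series: a minimal sketch of the relaxed
accessors being discussed, assuming Xen's ACCESS_ONCE() from xen/lib.h and
the single-member pte_t struct of the RISC-V port. It only makes the
alternative concrete: a single volatile store/load with no fence before the
store, which is exactly the trade-off the question above is about.]

/*
 * Hypothetical sketch only: plain single-copy accesses instead of
 * write_atomic()/read_atomic(), i.e. no ordering barrier is implied.
 */
static inline void write_pte(pte_t *p, pte_t pte)
{
    ACCESS_ONCE(p->pte) = pte.pte;   /* one volatile store, no preceding fence */
}

static inline pte_t read_pte(pte_t *p)
{
    return (pte_t){ .pte = ACCESS_ONCE(p->pte) };   /* one volatile load */
}

[Whether such a plain volatile access is strong enough here, versus the
ordering write_atomic() provides, is the open question in the reply above.]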
> > @@ -191,6 +195,45 @@ static bool __init check_pgtbl_mode_support(struct mmu_desc *mmu_desc,
> >      return is_mode_supported;
> >  }
> >
> > +void __init setup_fixmap_mappings(void)
> > +{
> > +    pte_t *pte, tmp;
> > +    unsigned int i;
> > +
> > +    BUILD_BUG_ON(FIX_LAST >= PAGETABLE_ENTRIES);
> > +
> > +    pte = &stage1_pgtbl_root[pt_index(HYP_PT_ROOT_LEVEL, FIXMAP_ADDR(0))];
> > +
> > +    /*
> > +     * In RISC-V page table levels are numbered from Lx to L0 where
> > +     * x is the highest page table level for currect MMU mode ( for example,
> > +     * for Sv39 has 3 page tables so the x = 2 (L2 -> L1 -> L0) ).
> > +     *
> > +     * In this cycle we want to find L1 page table because as L0 page table
> > +     * xen_fixmap[] will be used.
> > +     */
> > +    for ( i = HYP_PT_ROOT_LEVEL; i-- > 1; )
> > +    {
> > +        BUG_ON(!pte_is_valid(*pte));
> > +
> > +        pte = (pte_t *)LOAD_TO_LINK(pte_to_paddr(*pte));
> > +        pte = &pte[pt_index(i, FIXMAP_ADDR(0))];
> > +    }
> > +
> > +    BUG_ON(pte_is_valid(*pte));
> > +
> > +    tmp = paddr_to_pte(LINK_TO_LOAD((unsigned long)&xen_fixmap), PTE_TABLE);
> > +    write_pte(pte, tmp);
> > +
> > +    RISCV_FENCE(rw, rw);
>
> In the changes section you say "r,r", and I was wondering about that. I
> take it that "rw,rw" is really what's needed here.

It was a typo. I will update the change log. Thanks.

~ Oleksii
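[Editorial illustration of the guard-slot point raised above, not code from
the series: one way to make the guard explicit is to name a slot that no
code ever maps, so FIX_LAST genuinely covers an unused entry. FIX_GUARD is
a hypothetical name; fix_to_virt()/virt_to_fix() follow the pattern other
ports use and are shown only for context.]

/* Sketch: an explicitly named guard slot that is never handed out or mapped. */
#define FIX_PMAP_BEGIN  (0)                                  /* Start of PMAP */
#define FIX_PMAP_END    (FIX_PMAP_BEGIN + NUM_FIX_PMAP - 1)  /* End of PMAP */
#define FIX_MISC        (FIX_PMAP_END + 1)                   /* Ephemeral mappings */
#define FIX_GUARD       (FIX_MISC + 1)                       /* hypothetical: kept unmapped */
#define FIX_LAST        FIX_GUARD                            /* last slot is the guard */

#define fix_to_virt(slot)  ((void *)FIXMAP_ADDR(slot))

static inline unsigned int virt_to_fix(unsigned long vaddr)
{
    /* Addresses inside the guard slot are rejected as well. */
    BUG_ON(vaddr >= FIXMAP_ADDR(FIX_GUARD) || vaddr < FIXMAP_ADDR(0));

    return (vaddr - FIXMAP_BASE) >> PAGE_SHIFT;
}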