Re: [PATCH v3 4/9] xen/riscv: setup fixmap mapping
On Mon, 2024-08-05 at 18:14 +0200, Jan Beulich wrote:
> On 05.08.2024 18:02, oleksii.kurochko@xxxxxxxxx wrote:
> > On Mon, 2024-08-05 at 17:45 +0200, Jan Beulich wrote:
> > > On 05.08.2024 17:13, oleksii.kurochko@xxxxxxxxx wrote:
> > > > On Mon, 2024-07-29 at 15:35 +0200, Jan Beulich wrote:
> > > > > > + }
> > > > > > +
> > > > > > + BUG_ON(pte_is_valid(*pte));
> > > > > > +
> > > > > > +     tmp = paddr_to_pte(LINK_TO_LOAD((unsigned long)&xen_fixmap),
> > > > > > +                        PTE_TABLE);
> > > > >
> > > > > I'm a little puzzled by the use of LINK_TO_LOAD() (and
> > > > > LOAD_TO_LINK() a little further up) here. Don't you have
> > > > > functioning __pa() and __va()?
> > > > Can __pa() and __va() be used in this case?
> > > >
> > > > According to comments for other architectures, these macros are
> > > > used for converting between Xen heap virtual addresses (VA) and
> > > > machine addresses (MA). I may have misunderstood what is meant by
> > > > the Xen heap in this context, but I'm not sure if xen_fixmap[] and
> > > > page tables are considered part of the Xen heap.
> > >
> > > I didn't check Arm, but on x86 virt_to_maddr() (underlying __pa())
> > > has special case code to also allow addresses within the Xen image
> > > (area).
> >
> > Yes, it is true for __virt_to_maddr():
> >
> >     static inline unsigned long __virt_to_maddr(unsigned long va)
> >     {
> >         ASSERT(va < DIRECTMAP_VIRT_END);
> >         if ( va >= DIRECTMAP_VIRT_START )
> >             va -= DIRECTMAP_VIRT_START;
> >         else
> >         {
> >             BUILD_BUG_ON(XEN_VIRT_END - XEN_VIRT_START != GB(1));
> >             /* Signed, so ((long)XEN_VIRT_START >> 30) fits in an imm32. */
> >             ASSERT(((long)va >> (PAGE_ORDER_1G + PAGE_SHIFT)) ==
> >                    ((long)XEN_VIRT_START >> (PAGE_ORDER_1G + PAGE_SHIFT)));
> >
> >             va += xen_phys_start - XEN_VIRT_START;
> >         }
> >         return (va & ma_va_bottom_mask) |
> >                ((va << pfn_pdx_hole_shift) & ma_top_mask);
> >     }
> >
> > But __maddr_to_virt() ( __va() ) uses the directmap region:
> >
> >     static inline void *__maddr_to_virt(unsigned long ma)
> >     {
> >         ASSERT(pfn_to_pdx(ma >> PAGE_SHIFT) < (DIRECTMAP_SIZE >> PAGE_SHIFT));
> >         return (void *)(DIRECTMAP_VIRT_START +
> >                         ((ma & ma_va_bottom_mask) |
> >                          ((ma & ma_top_mask) >> pfn_pdx_hole_shift)));
> >     }
> >
> > But I have to use both __va() and __pa().
> >
> > __va() inside the loop to find the L1 page table:
> >
> >     for ( i = HYP_PT_ROOT_LEVEL; i-- > 1; )
> >     {
> >         BUG_ON(!pte_is_valid(*pte));
> >
> >         pte = (pte_t *)LOAD_TO_LINK(pte_to_paddr(*pte));
> >         pte = &pte[pt_index(i, FIXMAP_ADDR(0))];
> >     }
> >
> > __pa() to set the physical address of the L0 page table:
> >
> >     tmp = paddr_to_pte(LINK_TO_LOAD((unsigned long)&xen_fixmap),
> >                        PTE_TABLE);
> >     write_pte(pte, tmp);
>
> Hmm, then using at least LINK_TO_LOAD() is going to be unavoidable for
> the time being, I guess. Yet midterm I think they should disappear from
> here.
I think that in this specific case both LINK_TO_LOAD() and
LOAD_TO_LINK() have to be used.

__va() -> __maddr_to_virt() can't be used here, as its calculation is
based on DIRECTMAP_VIRT_START, so we would get an incorrect virtual
address for the page table.

__pa() -> __virt_to_maddr() can't be used either, because the virtual
address of xen_fixmap will always be bigger than DIRECTMAP_VIRT_START
( as XEN_VIRT_START > DIRECTMAP_VIRT_START ), so the physical address
(PA) would be calculated based on DIRECTMAP_VIRT_START and an incorrect
PA of xen_fixmap would be obtained.

It seems to me that the only choice we have at the moment ( and
probably in the case of the fixmap mapping ) is to use
LINK_TO_LOAD()/LOAD_TO_LINK().
~ Oleksii