Re: [PATCH v3 4/9] xen/riscv: setup fixmap mapping
On 30.07.2024 13:25, oleksii.kurochko@xxxxxxxxx wrote:
> On Tue, 2024-07-30 at 09:49 +0200, Jan Beulich wrote:
>> On 29.07.2024 18:11, oleksii.kurochko@xxxxxxxxx wrote:
>>> On Mon, 2024-07-29 at 15:35 +0200, Jan Beulich wrote:
>>>> On 24.07.2024 17:31, Oleksii Kurochko wrote:
>>>>> @@ -81,6 +82,14 @@ static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache)
>>>>>      BUG_ON("unimplemented");
>>>>>  }
>>>>>
>>>>> +/* Write a pagetable entry. */
>>>>> +static inline void write_pte(pte_t *p, pte_t pte)
>>>>> +{
>>>>> +    RISCV_FENCE(rw, rw);
>>>>> +    *p = pte;
>>>>> +    RISCV_FENCE(rw, rw);
>>>>> +}
>>>>
>>>> Why the first of the two fences?
>>> To ensure that writes have completed with the old mapping.
>>
>> Wait: There can certainly be uncompleted writes, but those must have
>> walked the page tables already, or else a (synchronous) fault could
>> not be delivered on the originating store instruction. Or am I
>> misunderstanding how paging (and associated faults) work on RISC-V?
> I am not sure that I correctly understand the part regarding
> (synchronous) fault. Could you please come up with an example?
>
> If something during page table walk will go wrong then a fault will be
> raised.

On the very insn, with subsequent insns not having started executing
(from an architectural perspective, i.e. leaving aside speculation).
That is what my use of "synchronous" meant.

> My initial intension was to be sure if I will be writing to an actively
> in-use page table that other cores can see at the time then fences
> above are required. It is not the case for now as we have only one CPUs
> but I assume that it will be a case when SMP will be enabled and more
> then one CPU will be able to work with the same page table.

Would that first fence really help there? The other CPU could use the
page tables in the window between the fence and the write.
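To make the two variants under discussion concrete, here is a minimal, portable sketch contrasting the patch's double-fence `write_pte()` with the single trailing fence Jan's question points towards. The `pte_t` layout and the `RISCV_FENCE()` macro body are stand-ins: on real RISC-V hardware `RISCV_FENCE(rw, rw)` would expand to a `fence rw, rw` instruction, while here a plain compiler barrier is used so the sketch compiles anywhere.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for Xen's types/macros (hypothetical, for illustration only).
 * On RISC-V, RISCV_FENCE(rw, rw) emits "fence rw, rw"; a compiler
 * barrier is substituted here so the sketch is portable. */
typedef struct { uint64_t pte; } pte_t;
#define RISCV_FENCE(p, s) __asm__ __volatile__ ("" ::: "memory")

/* Variant from the patch: fences on both sides of the store. */
static inline void write_pte_double_fence(pte_t *p, pte_t pte)
{
    RISCV_FENCE(rw, rw); /* order prior accesses before the PTE update */
    *p = pte;
    RISCV_FENCE(rw, rw); /* order the PTE update before later accesses */
}

/* Variant implied by the review question: only the trailing fence,
 * ensuring later accesses observe the new mapping.  As Jan notes,
 * another CPU can walk the tables between any fence and the store,
 * so the leading fence buys no extra ordering guarantee there. */
static inline void write_pte_single_fence(pte_t *p, pte_t pte)
{
    *p = pte;
    RISCV_FENCE(rw, rw);
}
```

Either way, remote CPUs additionally need `sfence.vma` (and on SMP a remote TLB flush) before stale translations are guaranteed gone; the fences only order the memory accesses themselves.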
My understanding of the need for fences is for them to be used at times
where ordering of memory accesses matters. For the moment I don't see
this as an aspect for the 1st fence here, but I may be overlooking
something.

>>>>> +    tmp = paddr_to_pte(LINK_TO_LOAD((unsigned long)&xen_fixmap), PTE_TABLE);
>>>>
>>>> I'm a little puzzled by the use of LINK_TO_LOAD() (and LOAD_TO_LINK()
>>>> a little further up) here. Don't you have functioning __pa() and
>>>> __va()?
>>> No, they haven't been introduced yet.
>>
>> So you're building up more technical debt, as the use of said two
>> constructs really should be limited to very early setup. Aiui once
>> you have functioning __va() / __pa() the code here would want
>> changing?
>
> Ideally yes, it would want to changed.
>
> Would it be the better solution to define __va() and __pa() using
> LOAD_TO_LINK()/LINK_TO_LOAD() so when "real" __va() and __pa() will be
> ready so only definitions of __va() and __pa() should be changed.

Well, that's something you're in a better position to answer, as it
depends on the ordering of subsequent work of yours.

Jan
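The interim arrangement proposed in the thread can be sketched as below. This is a hypothetical illustration, not the actual Xen code: the link/load offset value and the macro bodies are assumptions, and real boot code derives the offset at run time. The point is only that callers use `__va()`/`__pa()` from the start, so switching to directmap-based definitions later touches just two macros.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical fixed offset between the address Xen is linked at and
 * the address it was loaded at; real code computes this during boot. */
#define LINK_LOAD_OFFSET 0x40000000UL

#define LOAD_TO_LINK(addr) ((addr) + LINK_LOAD_OFFSET)
#define LINK_TO_LOAD(addr) ((addr) - LINK_LOAD_OFFSET)

/* Interim definitions as suggested in the thread: once "real",
 * directmap-based __va()/__pa() exist, only these two lines change,
 * while all call sites stay as they are. */
#define __va(pa) ((void *)LOAD_TO_LINK((uintptr_t)(pa)))
#define __pa(va) ((paddr_t)LINK_TO_LOAD((uintptr_t)(va)))
```

With this, the fixmap setup line would read `paddr_to_pte(__pa(&xen_fixmap), PTE_TABLE)` instead of spelling out `LINK_TO_LOAD()` at the call site.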