Re: [PATCH v2 06/17] xen/riscv: add root page table allocation
On 01.07.2025 16:02, Oleksii Kurochko wrote:
> On 7/1/25 12:27 PM, Jan Beulich wrote:
>> On 01.07.2025 11:44, Oleksii Kurochko wrote:
>>> On 7/1/25 8:29 AM, Jan Beulich wrote:
>>>> On 30.06.2025 18:18, Oleksii Kurochko wrote:
>>>>> On 6/30/25 5:22 PM, Jan Beulich wrote:
>>>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>>>> --- a/xen/arch/riscv/p2m.c
>>>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>>>> @@ -41,6 +41,91 @@ void p2m_write_unlock(struct p2m_domain *p2m)
>>>>>>>      write_unlock(&p2m->lock);
>>>>>>>  }
>>>>>>>
>>>>>>> +static void clear_and_clean_page(struct page_info *page)
>>>>>>> +{
>>>>>>> +    clean_dcache_va_range(page, PAGE_SIZE);
>>>>>>> +    clear_domain_page(page_to_mfn(page));
>>>>>>> +}
>>>>>>
>>>>>> A function of this name can, imo, only clear and then clean. Question
>>>>>> is why it's the other way around, and what the underlying requirement
>>>>>> is for the cleaning part to be there in the first place. Maybe that's
>>>>>> obvious for a RISC-V person, but it's entirely non-obvious to me (Arm
>>>>>> being different in this regard because of running with caches disabled
>>>>>> at certain points in time).
>>>>>
>>>>> You're right, the current name clear_and_clean_page() implies that
>>>>> clearing should come before cleaning, which contradicts the current
>>>>> implementation. The intent here is to ensure that the page contents
>>>>> are consistent in RAM (not just in cache) before use by other entities
>>>>> (guests or devices).
>>>>>
>>>>> The clean must follow the clear - so yes, the order needs to be
>>>>> reversed.
>>>>
>>>> What you don't address though - why's the cleaning needed in the first
>>>> place?
>>>
>>> If we clean the data cache first, we flush the d-cache and then use the
>>> page to perform the clear operation. As a result, the "cleared" value
>>> will be written into the d-cache. To avoid polluting the d-cache with
>>> the "cleared" value, the correct sequence is to clear the page first,
>>> then clean the data cache.
>>
>> If you want to avoid cache pollution, I think you'd need to use a form of
>> stores which simply bypass the cache. Yet then - why would this matter
>> here, but not elsewhere? Wouldn't you better leave such to the hardware,
>> unless you can prove a (meaningful) performance gain?
>
> I thought about the case where the IOMMU doesn't support coherent walks
> and p2m tables are shared between CPU and IOMMU. Then my understanding is:
> - clear_page(p) just zeroes a page in the CPU's cache.
> - But the IOMMU can see old or uninitialized data if the zeroes are still
>   in the cache.
> - So clean_cache() is needed to write the data back from cache to RAM
>   before the page is used as part of a page table for the IOMMU.

Okay, so this is purely about something that doesn't matter at all for now
(until IOMMU support is introduced). Fair enough then to play safe from the
beginning.
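For illustration, a minimal sketch of the reordered helper the thread
converges on - clear first, then clean - using only the helpers already
quoted from the patch; whether the clean matters before IOMMU support lands
is exactly the point settled above:

    /*
     * Zero the page first, then write the zeroed cache lines back to RAM,
     * so that a non-coherent observer (e.g. an IOMMU without coherent
     * page-table walks) never sees stale data. Until such an observer
     * exists, the clean is purely precautionary.
     */
    static void clear_and_clean_page(struct page_info *page)
    {
        clear_domain_page(page_to_mfn(page));
        clean_dcache_va_range(page, PAGE_SIZE);
    }
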
>>>>>>> +    unsigned int nr_pages = _AC(1,U) << order;
>>>>>>
>>>>>> Nit (style): Missing blank after comma.
>>>>>
>>>>> I've changed that to BIT(order, U).
>>>>>
>>>>>>> +    /* Return back nr_pages necessary for p2m root table. */
>>>>>>> +
>>>>>>> +    if ( ACCESS_ONCE(d->arch.paging.p2m_total_pages) < nr_pages )
>>>>>>> +        panic("Specify more xen,domain-p2m-mem-mb\n");
>>>>>>
>>>>>> You shouldn't panic() in anything involved in domain creation. You
>>>>>> want to return NULL in this case.
>>>>>
>>>>> It makes sense in this case just to return NULL.
>>>>>
>>>>>> Further, to me the use of "more" looks misleading here. Do you
>>>>>> perhaps mean "larger" or "bigger"?
>>>>>>
>>>>>> This also looks to be happening without any lock held. If that's
>>>>>> intentional, I think the "why" wants clarifying in a code comment.
>>>>>
>>>>> Agree, returning back the pages necessary for the p2m root table
>>>>> should be done under spin_lock(&d->arch.paging.lock).
>>>>
>>>> Which should be acquired at the paging_*() layer then, not at the
>>>> p2m_*() layer. (As long as you mean to have that separation, that is.
>>>> See the earlier discussion on that matter.)
>>>
>>> Then part of p2m_set_allocation() should be moved to paging_*() too.
>>
>> Not exactly sure what you mean. On x86 at least the paging layer part of
>> the function is pretty slim.
>
> I meant that the part of the code between spin_lock(&d->arch.paging.lock)
> and spin_unlock(&d->arch.paging.lock) in p2m_set_allocation() should be
> moved to the paging_*() layer, by the same logic as you suggested for
> moving the part of p2m_allocate_root() that is guarded by
> d->arch.paging.lock to the paging_*() layer.

Yes, of course.

Jan
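For illustration, a rough sketch of the split the thread settles on - the
p2m_*() function returning NULL instead of panic()-ing, with
d->arch.paging.lock taken by a paging_*() wrapper. The wrapper name
paging_allocate_root(), the P2M_ROOT_ORDER constant, and the exact
signatures are assumptions for the sketch, not code from the series:

    /* p2m layer: no locking here; the caller holds d->arch.paging.lock. */
    static struct page_info *p2m_allocate_root(struct domain *d)
    {
        unsigned int nr_pages = BIT(P2M_ROOT_ORDER, U);

        /* Fail the allocation instead of panic()-ing at domain creation. */
        if ( ACCESS_ONCE(d->arch.paging.p2m_total_pages) < nr_pages )
            return NULL;

        ACCESS_ONCE(d->arch.paging.p2m_total_pages) -= nr_pages;

        return alloc_domheap_pages(d, P2M_ROOT_ORDER, 0);
    }

    /* paging layer: owns the lock, as asked for in the review. */
    struct page_info *paging_allocate_root(struct domain *d)
    {
        struct page_info *root;

        spin_lock(&d->arch.paging.lock);
        root = p2m_allocate_root(d);
        spin_unlock(&d->arch.paging.lock);

        return root;
    }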