Re: [PATCH v2 17/17] xen/riscv: add support of page lookup by GFN
On 10.06.2025 15:05, Oleksii Kurochko wrote:
> Introduce helper functions for safely querying the P2M (physical-to-machine)
> mapping:
> - add p2m_read_lock(), p2m_read_unlock(), and p2m_is_locked() for managing
>   P2M lock state.
> - Implement p2m_get_entry() to retrieve mapping details for a given GFN,
>   including MFN, page order, and validity.
> - Add p2m_lookup() to encapsulate read-locked MFN retrieval.
> - Introduce p2m_get_page_from_gfn() to convert a GFN into a page_info
>   pointer, acquiring a reference to the page if valid.
>
> Implementations are based on Arm's functions with some minor modifications:
> - p2m_get_entry():
>   - Reverse traversal of page tables, as RISC-V uses the opposite order
>     compared to Arm.
>   - Removed the return of p2m_access_t from p2m_get_entry() since
>     mem_access_settings is not introduced for RISC-V.

Didn't I see uses of p2m_access in earlier patches? If you don't mean to
have that, then please be consistent and use it {every,no}where.

>   - Updated BUILD_BUG_ON() to check using the level 0 mask, which corresponds
>     to Arm's THIRD_MASK.
>   - Replaced open-coded bit shifts with the BIT() macro.
>   - Other minor changes, such as using RISC-V-specific functions to validate
>     P2M PTEs, and replacing Arm-specific GUEST_* macros with their RISC-V
>     equivalents.
> - p2m_get_page_from_gfn():
>   - Removed p2m_is_foreign() and related logic, as this functionality is not
>     implemented for RISC-V.

Yet I expect you'll need this, sooner or later.

> --- a/xen/arch/riscv/include/asm/p2m.h
> +++ b/xen/arch/riscv/include/asm/p2m.h
> @@ -184,6 +184,24 @@ static inline int p2m_is_write_locked(struct p2m_domain *p2m)
>      return rw_is_write_locked(&p2m->lock);
>  }
>
> +static inline void p2m_read_lock(struct p2m_domain *p2m)
> +{
> +    read_lock(&p2m->lock);
> +}
> +
> +static inline void p2m_read_unlock(struct p2m_domain *p2m)
> +{
> +    read_unlock(&p2m->lock);
> +}
> +
> +static inline int p2m_is_locked(struct p2m_domain *p2m)
> +{
> +    return rw_is_locked(&p2m->lock);
> +}
> +
> +struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
> +                                        p2m_type_t *t);

Once again I don't think you can pass struct domain * here, when in the
long run a domain can have multiple P2Ms.

> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -1055,3 +1055,134 @@ int guest_physmap_add_entry(struct domain *d,
>  {
>      return p2m_insert_mapping(d, gfn, (1 << page_order), mfn, t);
>  }
> +
> +/*
> + * Get the details of a given gfn.
> + *
> + * If the entry is present, the associated MFN will be returned and the
> + * access and type filled up. The page_order will correspond to the

You removed p2m_access_t * from the parameters; you need to also update
the comment then accordingly.

> + * order of the mapping in the page table (i.e it could be a superpage).
> + *
> + * If the entry is not present, INVALID_MFN will be returned and the
> + * page_order will be set according to the order of the invalid range.
> + *
> + * valid will contain the value of bit[0] (e.g valid bit) of the
> + * entry.
> + */
> +static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
> +                           p2m_type_t *t,
> +                           unsigned int *page_order,
> +                           bool *valid)
> +{
> +    paddr_t addr = gfn_to_gaddr(gfn);
> +    unsigned int level = 0;
> +    pte_t entry, *table;
> +    int rc;
> +    mfn_t mfn = INVALID_MFN;
> +    p2m_type_t _t;

Please no local variables with leading underscores. In x86 we commonly
name such variables p2mt.

> +    DECLARE_OFFSETS(offsets, addr);

This is the sole use of "addr". Is such a local variable really worth
having?
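Just to illustrate the two remarks above (untested sketch only, and
assuming DECLARE_OFFSETS() is fine with being handed the expression
directly rather than a local variable), the declaration block could then
shrink to:

    unsigned int level = 0;
    pte_t entry, *table;
    int rc;
    mfn_t mfn = INVALID_MFN;
    p2m_type_t p2mt;                     /* instead of "_t" */
    /* No separate "addr" local; its only use was right here. */
    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
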
> +    ASSERT(p2m_is_locked(p2m));
> +    BUILD_BUG_ON(XEN_PT_LEVEL_MAP_MASK(0) != PAGE_MASK);
> +
> +    /* Allow t to be NULL */
> +    t = t ?: &_t;
> +
> +    *t = p2m_invalid;
> +
> +    if ( valid )
> +        *valid = false;
> +
> +    /* XXX: Check if the mapping is lower than the mapped gfn */
> +
> +    /* This gfn is higher than the highest the p2m map currently holds */
> +    if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
> +    {
> +        for ( level = P2M_ROOT_LEVEL; level ; level-- )

Nit: Stray blank before the 2nd semicolon. (Again at least once below.)

> +            if ( (gfn_x(gfn) & (XEN_PT_LEVEL_MASK(level) >> PAGE_SHIFT)) >
> +                 gfn_x(p2m->max_mapped_gfn) )
> +                break;
> +
> +        goto out;
> +    }
> +
> +    table = p2m_get_root_pointer(p2m, gfn);
> +
> +    /*
> +     * the table should always be non-NULL because the gfn is below
> +     * p2m->max_mapped_gfn and the root table pages are always present.
> +     */
> +    if ( !table )
> +    {
> +        ASSERT_UNREACHABLE();
> +        level = P2M_ROOT_LEVEL;
> +        goto out;
> +    }
> +
> +    for ( level = P2M_ROOT_LEVEL; level ; level-- )
> +    {
> +        rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
> +        if ( (rc == GUEST_TABLE_MAP_NONE) && (rc != GUEST_TABLE_MAP_NOMEM) )

This condition looks odd. As written the rhs of the && is redundant.

> +            goto out_unmap;
> +        else if ( rc != GUEST_TABLE_NORMAL )

As before, no real need for "else" in such cases.

> +            break;
> +    }
> +
> +    entry = table[offsets[level]];
> +
> +    if ( p2me_is_valid(p2m, entry) )
> +    {
> +        *t = p2m_type_radix_get(p2m, entry);

If the incoming argument is NULL, the somewhat expensive radix tree lookup
is unnecessary here.

> +        mfn = pte_get_mfn(entry);
> +        /*
> +         * The entry may point to a superpage. Find the MFN associated
> +         * to the GFN.
> +         */
> +        mfn = mfn_add(mfn,
> +                      gfn_x(gfn) & (BIT(XEN_PT_LEVEL_ORDER(level), UL) - 1));
> +
> +        if ( valid )
> +            *valid = pte_is_valid(entry);

Interesting. Why not the P2M counterpart of the function? Yes, the comment
ahead of the function says so, but I don't see why the valid bit suddenly
is relevant here (besides the P2M type).

> +    }
> +
> +out_unmap:
> +    unmap_domain_page(table);
> +
> +out:

Nit: Style (both labels).

> +    if ( page_order )
> +        *page_order = XEN_PT_LEVEL_ORDER(level);
> +
> +    return mfn;
> +}
> +
> +static mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)

pointer-to-const for the 1st arg? But again more likely struct p2m_domain *
anyway?

> +{
> +    mfn_t mfn;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +    p2m_read_lock(p2m);
> +    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);
> +    p2m_read_unlock(p2m);
> +
> +    return mfn;
> +}
> +
> +struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,

Same here - likely you mean struct p2m_domain * instead.

> +                                        p2m_type_t *t)
> +{
> +    p2m_type_t p2mt = {0};

Why a compound initializer for something that isn't a compound object? And
why plain 0 for something that is an enumerated type?

> +    struct page_info *page;
> +
> +    mfn_t mfn = p2m_lookup(d, gfn, &p2mt);
> +
> +    if ( t )
> +        *t = p2mt;

What's wrong with passing t directly to p2m_lookup()?

Jan
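P.S. To be concrete about that last point, a minimal, untested sketch of
what I have in mind (still taking struct domain * only because that's what
the patch currently uses, and assuming the type value isn't needed again
further down in the remainder of the function, which isn't visible here):

    struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
                                            p2m_type_t *t)
    {
        struct page_info *page;
        /*
         * p2m_get_entry() already tolerates a NULL type pointer, so the
         * caller's pointer can be handed straight through p2m_lookup().
         */
        mfn_t mfn = p2m_lookup(d, gfn, t);

        ...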