Re: [Xen-devel] [PATCH] x86/altp2m: cleanup p2m_altp2m_lazy_copy
On Fri, Apr 12, 2019 at 1:44 PM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>
> On 12/04/2019 17:39, Tamas K Lengyel wrote:
> > The p2m_altp2m_lazy_copy is responsible for lazily populating an altp2m view
> > when the guest traps out due to no EPT entry being present in the active
> > view.
> > Currently the function took several inputs that it didn't use and also
> > locked/unlocked gfns when it didn't need to.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
>
> Re-reviewing in light of the discussion.
>
> > diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> > index b9bbb8f485..140c707348 100644
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -2375,54 +2375,46 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
> >   * indicate that outer handler should handle fault
> >   */
> >
> > -bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> > -                            unsigned long gla, struct npfec npfec,
> > -                            struct p2m_domain **ap2m)
> > +bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
> > +                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
> > +                            unsigned int page_order)
> >  {
> > -    struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
> > -    p2m_type_t p2mt;
> > -    p2m_access_t p2ma;
> > -    unsigned int page_order;
> > -    gfn_t gfn = _gfn(paddr_to_pfn(gpa));
> > +    p2m_type_t ap2mt;
> > +    p2m_access_t ap2ma;
> >      unsigned long mask;
> > -    mfn_t mfn;
> > +    gfn_t gfn;
> > +    mfn_t amfn;
> >      int rv;
> >
> > -    *ap2m = p2m_get_altp2m(v);
> > -
> > -    mfn = get_gfn_type_access(*ap2m, gfn_x(gfn), &p2mt, &p2ma,
> > -                              0, &page_order);
> > -    __put_gfn(*ap2m, gfn_x(gfn));
> > -
> > -    if ( !mfn_eq(mfn, INVALID_MFN) )
> > -        return 0;
>
> With the rearranged logic, and feeding in some of my cleanup series, you
> want something like this:
>
>     /* Entry not present in hp2m?  Caller should handle the fault. */
>     if ( mfn_eq(hmfn, INVALID_MFN) )
>         return 0;
>
> There is no point doing the ap2m lookup if you're going to bail because
> of the hp2m miss anyway.

True. Good catch.

>
> > +    p2m_lock(ap2m);
> >
> > -    mfn = get_gfn_type_access(hp2m, gfn_x(gfn), &p2mt, &p2ma,
> > -                              P2M_ALLOC, &page_order);
> > -    __put_gfn(hp2m, gfn_x(gfn));
> > +    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma,
> > +                                 0, NULL, false);
> >
> > -    if ( mfn_eq(mfn, INVALID_MFN) )
> > +    /* Bail if entry is already in altp2m or there is no entry is hostp2m */
>
> This comment then reduces to:
>
>     /* Entry already present in ap2m?  Caller should handle the fault. */
>
> > +    if ( !mfn_eq(amfn, INVALID_MFN) || mfn_eq(hmfn, INVALID_MFN) )
> > +    {
> > +        p2m_unlock(ap2m);
> >          return 0;
> > -
> > -    p2m_lock(*ap2m);
> > +    }
> >
> >      /*
> >       * If this is a superpage mapping, round down both frame numbers
> >       * to the start of the superpage.
> >       */
> >      mask = ~((1UL << page_order) - 1);
> > -    mfn = _mfn(mfn_x(mfn) & mask);
> > -    gfn = _gfn(gfn_x(gfn) & mask);
> > +    hmfn = _mfn(mfn_x(hmfn) & mask);
> > +    gfn = _gfn(gfn_l & mask);
> >
> > -    rv = p2m_set_entry(*ap2m, gfn, mfn, page_order, p2mt, p2ma);
> > -    p2m_unlock(*ap2m);
> > +    rv = p2m_set_entry(ap2m, gfn, hmfn, page_order, hp2mt, hp2ma);
> > +    p2m_unlock(ap2m);
> >
> >      if ( rv )
> >      {
> >          gdprintk(XENLOG_ERR,
> > -                 "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
> > -                 gfn_x(gfn), mfn_x(mfn), (unsigned long)*ap2m);
> > -        domain_crash(hp2m->domain);
> > +                 "failed to set entry for %#"PRIx64" -> %#"PRIx64" p2m %#"PRIx64"\n",
> > +                 gfn_l, mfn_x(hmfn), (unsigned long)ap2m);
>
> This should be a gprintk(), not a gdprintk(), because there is nothing
> more infuriating in release builds than a domain_crash() with no
> qualification.
>
> Printing the virtual address of the ap2m is also useless as far as
> diagnostics go.  How about something like:
>
> "Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn" (type %u, access %u,
> order %u) for view %u\n"
>
> if there is an easy way to access the view id.  This at least stands a
> chance of being useful when debugging.

Yeap, makes sense.

>
> > +        domain_crash(ap2m->domain);
> >      }
> >
> >      return 1;
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index 2801a8ccca..c25e2a3cd8 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -867,8 +867,9 @@ void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
> >  void p2m_flush_altp2m(struct domain *d);
> >
> >  /* Alternate p2m paging */
> > -bool_t p2m_altp2m_lazy_copy(struct vcpu *v, paddr_t gpa,
> > -    unsigned long gla, struct npfec npfec, struct p2m_domain **ap2m);
> > +bool_t p2m_altp2m_lazy_copy(struct p2m_domain *ap2m, unsigned long gfn_l,
> > +                            mfn_t hmfn, p2m_type_t hp2mt, p2m_access_t hp2ma,
> > +                            unsigned int page_order);
>
> Can we switch to bool while changing everything else as well?

Sure.

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
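Putting the review comments above together, the front half of the reworked function would presumably end up looking something like the sketch below. This is only an illustration assembled from the calls already present in the patch, and it assumes the bool_t -> bool conversion requested at the end of the review; it is not necessarily the version that was eventually committed:

    /* Entry not present in the hostp2m?  Caller should handle the fault. */
    if ( mfn_eq(hmfn, INVALID_MFN) )
        return false;

    p2m_lock(ap2m);

    /* Only consult the altp2m once we know the hostp2m has something to copy. */
    amfn = __get_gfn_type_access(ap2m, gfn_l, &ap2mt, &ap2ma, 0, NULL, false);

    /* Entry already present in the ap2m?  Caller should handle the fault. */
    if ( !mfn_eq(amfn, INVALID_MFN) )
    {
        p2m_unlock(ap2m);
        return false;
    }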
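On the question of an easy way to get at the view id for the error message, one possibility (a sketch only, assuming the views are reachable through the domain's arch.altp2m_p2m[] array, which is how p2m_get_altp2m() resolves a view on x86) is a best-effort scan confined to the failure path, combined with the gprintk() wording suggested above:

    if ( rv )
    {
        unsigned int i, index = 0;

        /* Best-effort view-id lookup, purely for the diagnostic message. */
        for ( i = 0; i < MAX_ALTP2M; i++ )
            if ( ap2m->domain->arch.altp2m_p2m[i] == ap2m )
            {
                index = i;
                break;
            }

        gprintk(XENLOG_ERR,
                "Failed to lazy copy gfn %"PRI_gfn" -> %"PRI_mfn
                " (type %u, access %u, order %u) for view %u\n",
                gfn_l, mfn_x(hmfn), hp2mt, hp2ma, page_order, index);
        domain_crash(ap2m->domain);
    }

    return true;

Alternatively, since the lazy copy always targets the currently active view, the index could simply be read from the faulting vCPU's altp2m state (the per-vCPU p2midx) if a vCPU pointer is still convenient to obtain at this point.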