Re: [PATCH v2 15/18] x86/mm: introduce a per-vCPU mapcache when using ASI
On Fri, Jan 10, 2025 at 04:19:03PM +0000, Alejandro Vallejo wrote:
> On Fri Jan 10, 2025 at 3:02 PM GMT, Roger Pau Monné wrote:
> > On Thu, Jan 09, 2025 at 03:08:15PM +0000, Alejandro Vallejo wrote:
> > > On Wed Jan 8, 2025 at 2:26 PM GMT, Roger Pau Monne wrote:
> > > > When using a unique per-vCPU root page table the per-domain region
> > > > becomes per-vCPU, and hence the mapcache is no longer shared between
> > > > all vCPUs of a domain.  Introduce per-vCPU mapcache structures, and
> > > > modify map_domain_page() to create per-vCPU mappings when possible.
> > > > Note the lock is also not needed when using per-vCPU map caches, as
> > > > the structure is no longer shared.
> > > >
> > > > This introduces some duplication in the domain and vcpu structures,
> > > > as both contain a mapcache field to support running with and without
> > > > per-vCPU page-tables.
> > > >
> > > > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > > > ---
> > > >  xen/arch/x86/domain_page.c        | 90 ++++++++++++++++++++-----------
> > > >  xen/arch/x86/include/asm/domain.h | 20 ++++---
> > > >  2 files changed, 71 insertions(+), 39 deletions(-)
> > > >
> > > > diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> > > > index 1372be20224e..65900d6218f8 100644
> > > > --- a/xen/arch/x86/domain_page.c
> > > > +++ b/xen/arch/x86/domain_page.c
> > > > @@ -74,7 +74,9 @@ void *map_domain_page(mfn_t mfn)
> > > >      struct vcpu *v;
> > > >      struct mapcache_domain *dcache;
> > > >      struct mapcache_vcpu *vcache;
> > > > +    struct mapcache *cache;
> > > >      struct vcpu_maphash_entry *hashent;
> > > > +    struct domain *d;
> > > >
> > > >  #ifdef NDEBUG
> > > >      if ( mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) )
> > > > @@ -85,9 +87,12 @@ void *map_domain_page(mfn_t mfn)
> > > >      if ( !v || !is_pv_vcpu(v) )
> > > >          return mfn_to_virt(mfn_x(mfn));
> > > >
> > > > -    dcache = &v->domain->arch.pv.mapcache;
> > > > +    d = v->domain;
> > > > +    dcache = &d->arch.pv.mapcache;
> > > >      vcache = &v->arch.pv.mapcache;
> > > > -    if ( !dcache->inuse )
> > > > +    cache = d->arch.vcpu_pt ? &v->arch.pv.mapcache.cache
> > > > +                            : &d->arch.pv.mapcache.cache;
> > > > +    if ( !cache->inuse )
> > > >          return mfn_to_virt(mfn_x(mfn));
> > > >
> > > >      perfc_incr(map_domain_page_count);
> > > > @@ -98,17 +103,18 @@ void *map_domain_page(mfn_t mfn)
> > > >      if ( hashent->mfn == mfn_x(mfn) )
> > > >      {
> > > >          idx = hashent->idx;
> > > > -        ASSERT(idx < dcache->entries);
> > > > +        ASSERT(idx < cache->entries);
> > > >          hashent->refcnt++;
> > > >          ASSERT(hashent->refcnt);
> > > >          ASSERT(mfn_eq(l1e_get_mfn(MAPCACHE_L1ENT(idx)), mfn));
> > > >          goto out;
> > > >      }
> > > >
> > > > -    spin_lock(&dcache->lock);
> > > > +    if ( !d->arch.vcpu_pt )
> > > > +        spin_lock(&dcache->lock);
> > >
> > > Hmmm. I wonder whether we might not want a nospec here...
> >
> > Not sure TBH, we have other instances of conditional locking that
> > don't use nospec().  That said, I'm not claiming those are correct.
> > Shouldn't people who care about this kind of speculation into
> > critical regions just use CONFIG_SPECULATIVE_HARDEN_LOCK?
> >
> > Thanks, Roger.
>
> Actually, to avoid a double lfence, I think this would work too, while
> still avoiding the lfence altogether when CONFIG_SPECULATIVE_HARDEN_LOCK
> is not set.
>
>     if ( !d->arch.vcpu_pt )
>         spin_lock(&dcache->lock);
>     else
>         block_lock_speculation();
We have a spin_lock_if() helper to do that. I will use it here.
Thanks, Roger.