
Re: [Xen-devel] [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability



On Tue, 26 Feb 2019, Julien Grall wrote:
> Hi,
> 
> On 26/02/2019 23:07, Stefano Stabellini wrote:
> > Reuse the existing padding field to pass cacheability information about
> > the memory mapping, specifically, whether the memory should be mapped as
> > normal memory or as device memory (this is what we have today).
> > 
> > Add a cacheability parameter to map_mmio_regions. 0 means device
> > memory, which is what we have today.
> > 
> > On ARM, map device memory as p2m_mmio_direct_dev (as is already done
> > today) and normal memory as p2m_ram_rw.
> > 
> > On x86, return an error if the requested cacheability is not device memory.
> > 
> > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
> > CC: JBeulich@xxxxxxxx
> > CC: andrew.cooper3@xxxxxxxxxx
> > ---
> >   xen/arch/arm/gic-v2.c            |  3 ++-
> >   xen/arch/arm/p2m.c               | 19 +++++++++++++++++--
> >   xen/arch/arm/platforms/exynos5.c |  4 ++--
> >   xen/arch/arm/platforms/omap5.c   |  8 ++++----
> >   xen/arch/arm/vgic-v2.c           |  2 +-
> >   xen/arch/arm/vgic/vgic-v2.c      |  2 +-
> >   xen/arch/x86/hvm/dom0_build.c    |  7 +++++--
> >   xen/arch/x86/mm/p2m.c            |  6 +++++-
> >   xen/common/domctl.c              |  8 +++++---
> >   xen/drivers/vpci/header.c        |  3 ++-
> >   xen/include/public/domctl.h      |  4 +++-
> >   xen/include/xen/p2m-common.h     |  3 ++-
> >   12 files changed, 49 insertions(+), 20 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> > index e7eb01f..1ea3da2 100644
> > --- a/xen/arch/arm/gic-v2.c
> > +++ b/xen/arch/arm/gic-v2.c
> > @@ -690,7 +690,8 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
> >   
> >           ret = map_mmio_regions(d, gaddr_to_gfn(v2m_data->addr),
> >                                  PFN_UP(v2m_data->size),
> > -                               maddr_to_mfn(v2m_data->addr));
> > +                               maddr_to_mfn(v2m_data->addr),
> > +                               CACHEABILITY_DEVMEM);
> >           if ( ret )
> >           {
> >               printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
> > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > index 30cfb01..5b8fcc5 100644
> > --- a/xen/arch/arm/p2m.c
> > +++ b/xen/arch/arm/p2m.c
> > @@ -1068,9 +1068,24 @@ int unmap_regions_p2mt(struct domain *d,
> >   int map_mmio_regions(struct domain *d,
> >                        gfn_t start_gfn,
> >                        unsigned long nr,
> > -                     mfn_t mfn)
> > +                     mfn_t mfn,
> > +                     uint32_t cache_policy)
> >   {
> > -    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
> > +    p2m_type_t t;
> > +
> > +    switch ( cache_policy )
> > +    {
> > +    case CACHEABILITY_MEMORY:
> > +        t = p2m_ram_rw;
> 
> Potentially, you want to clean the cache here.

We have been talking about this and I have been looking through the
code. I am still not exactly sure how to proceed.
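
To make your suggestion concrete, here is roughly what I imagine for the
CACHEABILITY_MEMORY case in map_mmio_regions(), using the existing
map_domain_page() and clean_and_invalidate_dcache_va_range() helpers.
This is only a sketch for discussion, not tested code:

    /* Sketch: clean each page before it becomes a cacheable guest
     * mapping, so no stale lines are left behind. */
    unsigned long i;

    for ( i = 0; i < nr; i++ )
    {
        void *va = map_domain_page(mfn_add(mfn, i));

        clean_and_invalidate_dcache_va_range(va, PAGE_SIZE);
        unmap_domain_page(va);
    }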

Is there a reason why cacheable reserved_memory pages should be treated
differently from normal memory with regard to cleaning the cache? It
seems to me that they should behave the same as far as cache issues go.

Is there a place where we clean the dcache for normal pages, other than
the one tied to p2m->clean_pte? That mechanism is different (it is there
for IOMMU reasons).
