
Re: [Xen-devel] [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability



Hi Stefano,

On 4/20/19 1:02 AM, Stefano Stabellini wrote:
On Tue, 26 Feb 2019, Julien Grall wrote:
Hi,

On 26/02/2019 23:07, Stefano Stabellini wrote:
Reuse the existing padding field to pass cacheability information about
the memory mapping, specifically, whether the memory should be mapped as
normal memory or as device memory (this is what we have today).

Add a cacheability parameter to map_mmio_regions. 0 means device
memory, which is what we have today.

On ARM, map device memory as p2m_mmio_direct_dev (as it is already done
today) and normal memory as p2m_ram_rw.

On x86, return error if the cacheability requested is not device memory.

Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
CC: JBeulich@xxxxxxxx
CC: andrew.cooper3@xxxxxxxxxx
---
   xen/arch/arm/gic-v2.c            |  3 ++-
   xen/arch/arm/p2m.c               | 19 +++++++++++++++++--
   xen/arch/arm/platforms/exynos5.c |  4 ++--
   xen/arch/arm/platforms/omap5.c   |  8 ++++----
   xen/arch/arm/vgic-v2.c           |  2 +-
   xen/arch/arm/vgic/vgic-v2.c      |  2 +-
   xen/arch/x86/hvm/dom0_build.c    |  7 +++++--
   xen/arch/x86/mm/p2m.c            |  6 +++++-
   xen/common/domctl.c              |  8 +++++---
   xen/drivers/vpci/header.c        |  3 ++-
   xen/include/public/domctl.h      |  4 +++-
   xen/include/xen/p2m-common.h     |  3 ++-
   12 files changed, 49 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index e7eb01f..1ea3da2 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -690,7 +690,8 @@ static int gicv2_map_hwdown_extra_mappings(struct domain *d)
            ret = map_mmio_regions(d, gaddr_to_gfn(v2m_data->addr),
                                  PFN_UP(v2m_data->size),
-                               maddr_to_mfn(v2m_data->addr));
+                               maddr_to_mfn(v2m_data->addr),
+                               CACHEABILITY_DEVMEM);
           if ( ret )
           {
               printk(XENLOG_ERR "GICv2: Map v2m frame to d%d failed.\n",
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 30cfb01..5b8fcc5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1068,9 +1068,24 @@ int unmap_regions_p2mt(struct domain *d,
   int map_mmio_regions(struct domain *d,
                        gfn_t start_gfn,
                        unsigned long nr,
-                     mfn_t mfn)
+                     mfn_t mfn,
+                     uint32_t cache_policy)
   {
-    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
+    p2m_type_t t;
+
+    switch ( cache_policy )
+    {
+    case CACHEABILITY_MEMORY:
+        t = p2m_ram_rw;

Potentially, you want to clean the cache here.

We have been talking about this and I have been looking through the
code. I am still not exactly sure how to proceed.

Is there a reason why cacheable reserved_memory pages should be treated
differently from normal memory with regard to cleaning the cache? It
seems to me that they should be the same in terms of cache issues?

Your wording is a bit confusing. I guess what you call "normal memory" is guest memory, am I right?

Any memory assigned to the guest is cleaned & invalidated (technically, clean is enough) before getting assigned to the guest (see flush_page_to_ram). So this patch is introducing a different behavior from what we currently have for other normal memory.

But my concern is that you may use the memory attributes inconsistently, breaking coherency. For instance, you map the region in guest A with cacheable attributes; then, after the guest dies, you remap it to guest B with non-cacheable attributes. Guest B may have an inconsistent view of the mapped memory.

This is one case where cleaning the cache would be necessary. One could consider this part of the "device reset" (this is a bit of an abuse of the name), so Xen should not take care of it.

The most important bit is to have documentation that reflects the issues with such parameters, so the user is aware of what could go wrong when using "iomem".


Is there a place where we clean the dcache for normal pages, one that is
not tied to p2m->clean_pte, which is different (it is there for iommu
reasons)?

p2m->clean_pte is only here to deal with non-coherent IOMMU page-table walkers. See above for how normal pages are flushed.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
