
Re: [Xen-devel] [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability



Hi,

On 22/04/2019 22:59, Stefano Stabellini wrote:
On Sun, 21 Apr 2019, Julien Grall wrote:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 30cfb01..5b8fcc5 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1068,9 +1068,24 @@ int unmap_regions_p2mt(struct domain *d,
    int map_mmio_regions(struct domain *d,
                         gfn_t start_gfn,
                         unsigned long nr,
-                     mfn_t mfn)
+                     mfn_t mfn,
+                     uint32_t cache_policy)
    {
-    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
+    p2m_type_t t;
+
+    switch ( cache_policy )
+    {
+    case CACHEABILITY_MEMORY:
+        t = p2m_ram_rw;

Potentially, you want to clean the cache here.
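
For illustration, a completed version of this switch with Julien's suggestion marked might look like the sketch below. This is only a sketch: the CACHEABILITY_DEVMEM name, the default error path and the final p2m_insert_mapping() call are assumptions based on the discussion, not quotes from the posted patch.

    switch ( cache_policy )
    {
    case CACHEABILITY_MEMORY:
        t = p2m_ram_rw;
        /*
         * Julien's point: clean the cache for the pages being mapped
         * here, mirroring what is already done for regular guest RAM.
         */
        break;

    case CACHEABILITY_DEVMEM: /* assumed name for the non-cacheable case */
        t = p2m_mmio_direct_dev;
        break;

    default:
        return -EOPNOTSUPP;
    }

    return p2m_insert_mapping(d, start_gfn, nr, mfn, t);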

We have been talking about this and I have been looking through the
code. I am still not exactly sure how to proceed.

Is there a reason why cacheable reserved_memory pages should be treated
differently from normal memory, in regards to cleaning the cache? It
seems to me that they should be the same in terms of cache issues?

Your wording is a bit confusing. I guess what you call "normal memory" is
guest memory, am I right?

Yes, right. I wonder if we need to come up with clearer terms. Given the
many types of memory we have to deal with, it might become even more
confusing going forward. Guest normal memory maybe? Or guest RAM?

The term "normal memory" is really confusing because this is a memory type on Arm. reserved-regions are also not *MMIO* as they are part of the RAM that was reserved for special usage. So the term "guest RAM" is also not appropriate.

I understand that 'iomem' is a quick way to get reserved-memory regions mapped in the guest. However, this feels like an abuse of the interface because reserved-memory regions are technically not MMIO. They can also be used by the OS for storing data when not otherwise in use (provided the DT node contains the property 'reusable').

Overall, we want to rethink how 'reserved-regions' are going to be treated. The solution suggested in this series is not going to be viable for very long.



Any memory assigned to the guest is cleaned & invalidated (technically, clean is enough) before it gets assigned to the guest (see flush_page_to_ram). So this patch is introducing a different behavior than what we currently have for other guest memory.
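
For context, the behaviour described above is roughly the following pattern (a sketch only: the helper name is made up, the real clean happens in the page allocator as pages are handed to the domain, and flush_page_to_ram(mfn, sync_icache) is the Arm helper mentioned above):

    /*
     * Illustration of the existing rule for guest RAM: clean (and
     * invalidate) every page before the guest can see it, so no stale
     * dirty cache lines leak into or out of the domain.
     */
    static void clean_new_guest_pages(mfn_t mfn, unsigned long nr_pages)
    {
        unsigned long i;

        for ( i = 0; i < nr_pages; i++ )
            flush_page_to_ram(mfn_x(mfn) + i, true /* also sync the icache */);
    }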

This is what I was trying to understand, thanks for the pointer. I am unsure whether we want to do this for reserved-memory regions too: on the one hand, it would make things more consistent; on the other hand, I am not sure it is the right behavior for reserved-memory. Let's think it through.

The use case is communication with other heterogeneous CPUs. In that case, it would matter if a domU crashed with the ring mapped and an unflushed (possibly partial) write to the ring, and then got restarted with the same ring mapping. In that case, it looks like we would want to clean the cache; it wouldn't matter whether that is done at VM shutdown or at VM creation time.

So maybe it makes sense to do something like flush_page_to_ram for reserved-memory pages. It seems simple to do at VM creation time: we could invalidate the cache when map_mmio_regions is called, either there or from the domctl handler (a sketch follows below). On the other hand, I don't know where to do it at domain destruction time, because no domctl is called to unmap the reserved-memory region. Also, cleaning the cache at domain destruction time would introduce a difference compared to guest normal memory.
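
A minimal sketch of the creation-time option, reusing the names from the patch above (whether the loop belongs in map_mmio_regions or in the domctl handler is exactly the open question, and skipping the icache sync is an assumption):

    case CACHEABILITY_MEMORY:
    {
        unsigned long i;

        t = p2m_ram_rw;
        /*
         * Clean the reserved-memory pages before they are mapped, so the
         * new domain does not inherit stale cache lines, mirroring
         * flush_page_to_ram() for regular guest RAM.
         */
        for ( i = 0; i < nr; i++ )
            flush_page_to_ram(mfn_x(mfn) + i, false /* no icache sync */);

        break;
    }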

I know I said the opposite in our meeting, but maybe cleaning the cache
for reserved-memory regions at domain creation time is the right way
forward?

I don't have a strong opinion on it.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

