
Re: [Xen-devel] [PATCH v2 3/9] xen/mm: move modify_identity_mmio to global file and drop __init



Hi Roger,

On 25/04/2017 09:01, Roger Pau Monne wrote:
On Mon, Apr 24, 2017 at 03:42:08PM +0100, Julien Grall wrote:
On 20/04/17 16:17, Roger Pau Monne wrote:
 /* Populate a HVM memory range using the biggest possible order. */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 52879e7438..0d970482cb 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1438,6 +1438,40 @@ int prepare_ring_for_helper(
     return 0;
 }

+int modify_mmio(struct domain *d, unsigned long gfn, unsigned long pfn,

Whilst you introduce this new function, please use mfn_t and gfn_t.

Also s/pfn/mfn/

Done.
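
For illustration, the typed prototype might then look something like this
(a sketch only; the parameter names are my assumption, not the actual v3
patch):

    /* gfn_t/mfn_t are the typesafe frame-number wrappers from xen/mm.h. */
    int modify_mmio(struct domain *d, gfn_t gfn, mfn_t mfn,
                    unsigned long nr_pages, bool map);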

+                unsigned long nr_pages, const bool map)
+{
+    int rc;
+
+    /*
+     * Make sure this function is only used by the hardware domain, because it
+     * can take an arbitrarily long time, and could DoS the whole system.
+     */
+    ASSERT(is_hardware_domain(d));

What would be the plan for guests if we decide to use vpci?

One option would be to not allow the DomU to relocate its BARs and to
ignore writes to the memory decode bit of the command register
(PCI_COMMAND_MEMORY), thus always keeping the BARs mapped. The other is to
somehow allow VMExit (and the ARM equivalent) continuation (something
similar to what we do with hypercalls).
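
A minimal sketch of the first option, assuming a hypothetical vPCI
command-register write handler (cmd_write() is made up for illustration;
PCI_COMMAND_MEMORY, pci_conf_write16() and friends are the existing
definitions from xen/pci_regs.h and xen/pci.h):

    #include <xen/pci.h>
    #include <xen/pci_regs.h>
    #include <xen/sched.h>

    /* Force memory decoding on for a DomU and discard its attempts to
     * change it, so the BARs always stay mapped; only the hardware
     * domain gets to toggle PCI_COMMAND_MEMORY. */
    static void cmd_write(struct pci_dev *pdev, uint16_t val)
    {
        if ( !is_hardware_domain(pdev->domain) )
            val |= PCI_COMMAND_MEMORY;

        pci_conf_write16(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
                         PCI_FUNC(pdev->devfn), PCI_COMMAND, val);
    }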

My understanding is that BARs may be allocated by the kernel when the firmware didn't do it. This is currently the case on ARM (and I guess x86), where Linux will always go through the BARs.

So if you do the first option, who would decide the position of the BARs?

For the second option, we can take advantage of superpage (4K, 2M, 1G) mappings on ARM, so the number of actual mappings would be really limited. For instance, a 64MB BAR would take only 32 2M mappings rather than 16384 4K ones.

Also, we are looking at MMIO continuation on ARM for other parts of the hypervisor. We might be able to leverage that for this function.
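
FWIW, a sketch of how a continuation-friendly version of the loop could
look, assuming the map/unmap hooks return the number of pages processed
when they hit a preemption point (the rc > 0 convention is an assumption
on my side; process_pending_softirqs() is the existing helper from
xen/softirq.h):

    /* Map/unmap in chunks, yielding between chunks instead of holding
     * the CPU for the whole range. */
    for ( ; ; )
    {
        rc = (map ? map_mmio_regions : unmap_mmio_regions)
             (d, gfn, nr_pages, mfn);
        if ( rc <= 0 ) /* 0: all done, < 0: error */
            break;

        nr_pages -= rc;
        gfn = gfn_add(gfn, rc);
        mfn = mfn_add(mfn, rc);
        process_pending_softirqs();
    }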


+
+    for ( ; ; )
+    {
+        rc = (map ? map_mmio_regions : unmap_mmio_regions)

On ARM, map_mmio_regions and unmap_mmio_regions will map the MMIO with very
strict attributes. I think we would need an extra argument to specify the
wanted memory attributes (maybe a p2m_type_t?).

I'm not sure I can do anything regarding this ATM. Sorry for my ignorance, but
map_mmio_regions on ARM maps the region as p2m_mmio_direct_dev, and according
to the comment that's "Read/write mapping of genuine Device MMIO area", which
fits exactly into my usage (I'm using it to map BARs).

We have a few p2m_mmio_direct_* p2m types because the architecture allows us to use fine-grained memory attributes.

The p2m type p2m_mmio_direct_dev is very restrictive (unaligned accesses forbidden, non-cacheable, non-gatherable). It should be used for MMIO regions that have side effects, and it will affect performance.

We use this one by default as it restricts the memory attributes the guest can use. However, this will be an issue for at least cacheable BARs. We recently had a similar issue on ARM with an SRAM device, as the driver may perform unaligned and cacheable accesses.

For DOM0 we are using p2m_mmio_direct_c and rely on the OS to restrict the memory attributes when necessary. We cannot do that for guests as this may have security implications.

So for guests we will decide on a case-by-case basis. For instance, when you map a BAR, you know its kind and can decide on a proper memory attribute.
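
As an illustration of the case-by-case idea, assuming modify_mmio() grew
the extra p2m_type_t argument suggested above (bar_is_prefetchable() is a
hypothetical helper, and treating "prefetchable" as a proxy for
"cacheable" is only an approximation):

    /* Pick a per-BAR p2m type and pass it down to the mapping code. */
    p2m_type_t p2mt = bar_is_prefetchable(pdev, bar)
                      ? p2m_mmio_direct_c    /* cacheable mapping */
                      : p2m_mmio_direct_dev; /* strict device memory */

    rc = modify_mmio(d, gfn, mfn, nr_pages, true /* map */, p2mt);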

Cheers,

--
Julien Grall
