
Re: [Xen-devel] [v7][PATCH 06/16] hvmloader/pci: skip reserved ranges



On 2015/7/15 16:34, Jan Beulich wrote:
> On 15.07.15 at 06:27, <tiejun.chen@xxxxxxxxx> wrote:
>> Furthermore, could we have this solution as follows?
>
> Yet more special casing code you want to add. I said no to this
> model, and unless you can address the issue _without_ adding
> a lot of special casing code, the answer will remain no (subject
> to co-maintainers overriding me).
>
> Jan

What about this?

@@ -301,6 +301,19 @@ void pci_setup(void)
             pci_mem_start <<= 1;
     }

+    for ( i = 0; i < memory_map.nr_map; i++ )
+    {
+        uint64_t reserved_start, reserved_size;
+        reserved_start = memory_map.map[i].addr;
+        reserved_size = memory_map.map[i].size;
+        if ( check_overlap(pci_mem_start, pci_mem_end - pci_mem_start,
+                           reserved_start, reserved_size) )
+        {
+ printf("Reserved device memory conflicts current PCI memory.\n");
+            BUG();
+        }
+    }
+
     if ( mmio_total > (pci_mem_end - pci_mem_start) )
     {
         printf("Low MMIO hole not large enough for all devices,"

This is very similar to our existing policy for [RESERVED_MEMORY_DYNAMIC_START, RESERVED_MEMORY_DYNAMIC_END] in patch #6, since such a conflict is likewise only a rare possibility in the real world. I could even fold this check into the point where patch #6 handles the conflict with [RESERVED_MEMORY_DYNAMIC_START, RESERVED_MEMORY_DYNAMIC_END].
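
For comparison, here is a minimal sketch of what that analogous check could
look like, assuming the RESERVED_MEMORY_DYNAMIC_START/END constants from
hvmloader's config.h and the same check_overlap() helper as above (the exact
form and placement in patch #6 may differ):

    /* Sketch only: bail out if any reserved device memory entry overlaps
     * hvmloader's dynamic reserved window.
     */
    for ( i = 0; i < memory_map.nr_map; i++ )
    {
        if ( check_overlap(RESERVED_MEMORY_DYNAMIC_START,
                           RESERVED_MEMORY_DYNAMIC_END -
                               RESERVED_MEMORY_DYNAMIC_START,
                           memory_map.map[i].addr,
                           memory_map.map[i].size) )
        {
            printf("Reserved device memory conflicts with dynamic reserved region.\n");
            BUG();
        }
    }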

Note that we don't need to worry about high memory: that case is already handled earlier in the hypervisor code, and it isn't affected by memory relocated later either, since our existing policy guarantees that RAM never overlaps the RDM regions.

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

