
Re: [Xen-devel] [PATCH v2 1/5] hvmloader: Correct bug in low mmio region accounting



On 20/06/13 11:40, Stefano Stabellini wrote:
On Thu, 20 Jun 2013, George Dunlap wrote:
On 19/06/13 18:18, Stefano Stabellini wrote:
On Tue, 18 Jun 2013, George Dunlap wrote:
When deciding whether to map a device BAR in low MMIO space (<4GiB),
hvmloader compares its size with "mmio_left", which is initialised to
the size of the low MMIO range (pci_mem_end - pci_mem_start).  However,
even when it maps a device in high MMIO space instead, it still
subtracts the size of its BAR from mmio_left.

This patch first changes the name of this variable to "low_mmio_left"
to distinguish it from generic MMIO, and corrects the logic to only
subtract the size of the BAR for devices mapped in the low MMIO region.

Also make low_mmio_left unsigned, and don't allow it to go negative.
Its main use is in comparisons against 64-bit unsigned values, where a
negative signed value would be converted to a huge unsigned one and
give incorrect results.  Not subtracting is OK because if there's not
enough room, the BAR won't actually be mapped.
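
For illustration, the conversion pitfall looks like this (a minimal
standalone sketch, not hvmloader code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t  mmio_left = -4096;     /* accounting has gone negative */
        uint64_t bar_sz    = 0x100000;  /* 1MiB BAR */

        /* Usual arithmetic conversions turn -4096 into a huge uint64_t,
         * so this claims the BAR fits even though no space is left. */
        if ( mmio_left >= bar_sz )
            printf("fits\n");
        else
            printf("does not fit\n");

        return 0;
    }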

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
CC: Ian Campbell <ian.campbell@xxxxxxxxxx>
CC: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
CC: Hanweidong <hanweidong@xxxxxxxxxx>
---
 tools/firmware/hvmloader/pci.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index c78d4d3..8691a19 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -38,11 +38,10 @@ void pci_setup(void)
 {
     uint8_t is_64bar, using_64bar, bar64_relocate = 0;
     uint32_t devfn, bar_reg, cmd, bar_data, bar_data_upper;
-    uint64_t base, bar_sz, bar_sz_upper, mmio_total = 0;
+    uint64_t base, bar_sz, bar_sz_upper, low_mmio_left, mmio_total = 0;
     uint32_t vga_devfn = 256;
     uint16_t class, vendor_id, device_id;
     unsigned int bar, pin, link, isa_irq;
-    int64_t mmio_left;
 
     /* Resources assignable to PCI devices via BARs. */
     struct resource {
@@ -244,7 +243,7 @@ void pci_setup(void)
     io_resource.base = 0xc000;
     io_resource.max = 0x10000;
 
-    mmio_left = pci_mem_end - pci_mem_start;
+    low_mmio_left = pci_mem_end - pci_mem_start;
 
     /* Assign iomem and ioport resources in descending order of size. */
     for ( i = 0; i < nr_bars; i++ )
@@ -253,7 +252,7 @@ void pci_setup(void)
         bar_reg = bars[i].bar_reg;
         bar_sz  = bars[i].bar_sz;
 
-        using_64bar = bars[i].is_64bar && bar64_relocate && (mmio_left < bar_sz);
+        using_64bar = bars[i].is_64bar && bar64_relocate && (low_mmio_left < bar_sz);
         bar_data = pci_readl(devfn, bar_reg);
 
         if ( (bar_data & PCI_BASE_ADDRESS_SPACE) ==
@@ -273,9 +272,10 @@ void pci_setup(void)
             }
             else {
                 resource = &mem_resource;
+                if ( bar_sz <= low_mmio_left )
+                    low_mmio_left -= bar_sz;
Why do you need this check?  Isn't the above
if(using_64bar && (bar_sz > PCI_MIN_BIG_BAR_SIZE)) enough?
This is in the lowmem region.  There may be BARs which can't be relocated
to the high PCI region but which nevertheless don't fit in the low PCI
region.  If a BAR doesn't fit, it will hit the "no space for resource"
conditional below and not be mapped; we need to make sure not to subtract
its size in that case.
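
In other words, the decision can be modelled roughly like this
(illustrative only, not the literal pci.c code):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: a BAR that won't fit in the low hole is skipped
     * ("no space for resource") and must not be charged to low_left. */
    static int place_low(uint64_t *next, uint64_t max,
                         uint64_t *low_left, uint64_t bar_sz)
    {
        uint64_t base = (*next + bar_sz - 1) & ~(bar_sz - 1);  /* align */

        if ( bar_sz <= *low_left )
            *low_left -= bar_sz;        /* only charge BARs that can fit */

        if ( base + bar_sz > max )
        {
            printf("no space for resource!\n");
            return 0;                   /* skipped, left unmapped */
        }

        *next = base + bar_sz;          /* mapped at 'base' */
        return 1;
    }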

I suppose a more robust method might be to use resource->max - resource->base
instead of keeping a separate accounting... I had originally thought that
would be too invasive a change, but I'm not so sure now... any thoughts?
You could just add:

if (resource == &mem_resource)
    low_mmio_left -= bar_sz;

right below the resource size check. This way we would have only one
check to see if the bar fits.

Actually I just changed v3 to get rid of low_mmio_left altogether, and just use "mem_resource.max - mem_resource.base" for the one and only time the value is needed.
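
Roughly, the relocation decision then becomes something like this (a
sketch; see the v3 patch for the exact code):

    /* Relocate to the high region when the BAR is bigger than what is
     * still free in the low MMIO hole. */
    using_64bar = bars[i].is_64bar && bar64_relocate &&
        (bar_sz > (mem_resource.max - mem_resource.base));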

 -George


