Re: [Xen-devel] [v7][PATCH 06/16] hvmloader/pci: skip reserved ranges



On 2015/7/15 8:55, Chen, Tiejun wrote:
>>> I agree we'd better overhaul this since we already found
>>> something unreasonable here. But one or two weeks is really not
>>> enough to fix this with a bitmap framework, and although a bitmap
>>> can make MMIO allocation better, it is more complicated than we
>>> need if we just want to allocate PCI MMIO.
>>>
>>> So could we do this next? I just feel that if you'd like to spend
>>> more time helping me refine our current solution, that is more
>>> realistic for this case :) Then we can go into the bitmap approach
>>> in detail, or work out a better solution, once we have a
>>> sufficient time slot.

>> Looking at how long it took to get here (wasn't this series
>> originally even meant to go into 4.5?) and how much time I already
>> spent

> I certainly appreciate your time.

> I didn't mean it's a waste of time at this point. I just want to
> express that it's hard to implement that solution within one or two
> weeks so that it can still get into 4.6 as an exception.

> Note that I know this feature is not yet accepted as an exception
> for 4.6 right now, so I'm making an assumption.

>> reviewing all the previous versions, I don't see a point in
>> wasting even more time on working out details of an approach that's
>> getting too complex/ugly already anyway.

> Here I'm trying to seek this kind of two-step approach, if possible.


Furthermore, could we go with a solution like the following?

@@ -407,8 +408,29 @@ void pci_setup(void)
         }

         base = (resource->base  + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
+ reallocate_bar:
+        /*
+         * We need to skip reserved regions like RMRRs, but doing so
+         * probably causes the remaining allocations to fail.
+         */
+        for ( j = 0; j < memory_map.nr_map; j++ )
+        {
+            if ( memory_map.map[j].type != E820_RAM )
+            {
+                reserved_end = memory_map.map[j].addr + memory_map.map[j].size;
+                if ( check_overlap(base, bar_sz,
+                                   memory_map.map[j].addr,
+                                   memory_map.map[j].size) )
+                {
+                    mmio_conflict = true;
+                    /* Realign past the reserved region. */
+                    base = (reserved_end + bar_sz - 1) & ~(uint64_t)(bar_sz - 1);
+                    goto reallocate_bar;
+                }
+            }
+        }
         bar_data |= (uint32_t)base;
         bar_data_upper = (uint32_t)(base >> 32);
         base += bar_sz;

         if ( (base < resource->base) || (base > resource->max) )
@@ -416,6 +438,11 @@ void pci_setup(void)
         {
             printf("pci dev %02x:%x bar %02x size "PRIllx": no space for "
                    "resource!\n", devfn>>3, devfn&7, bar_reg,
                    PRIllx_arg(bar_sz));
+            if ( mmio_conflict )
+            {
+                printf("MMIO conflicts with RDM.\n");
+                BUG();
+            }
             continue;
         }
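
Note the hunk above also relies on a few declarations earlier in
pci_setup() which are not part of this diff; presumably something
along these lines (the names come from the hunk itself, but the exact
types and placement are my assumption):

    unsigned int j;              /* index into the memory_map e820 copy */
    uint64_t reserved_end;       /* end of the reserved region under test */
    bool mmio_conflict = false;  /* set once a BAR has to skip an RDM */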

Here we still check RDMs and skip them if any conflict exists, but in
the end, if 1) there is not sufficient space to allocate all BARs for
this VM, and 2) this shortage is triggered by an RDM conflict, we stop
creating this VM.
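
For reference, the loop above leans on check_overlap() returning true
when the candidate BAR range intersects a reserved e820 entry. A
minimal sketch of the semantics I'm assuming (hvmloader has a helper
of this name in util.c, but treat this as an illustration rather than
a quote of it):

    /*
     * Two half-open ranges [start, start + size) and
     * [reserved_start, reserved_start + reserved_size) overlap
     * iff each one begins before the other ends.
     */
    static int check_overlap(uint64_t start, uint64_t size,
                             uint64_t reserved_start,
                             uint64_t reserved_size)
    {
        return (start + size > reserved_start) &&
               (reserved_start + reserved_size > start);
    }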

This may slightly change the current mechanism, but it guarantees we
don't silently ignore any conflict. I admit it is not a good solution
because the alignment issue still exists, but I think it may be
acceptable as long as all PCI devices assigned to this VM can work.
And in the real world most RDMs fall below 0xF0000000 (the current
default PCI memory start address), so this kind of conflict is not a
common case.
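
To make the alignment cost concrete, here is a small standalone
example (the numbers are hypothetical, purely to illustrate the
rounding the patch performs) showing how a relocated BAR keeps its
natural alignment after skipping a reserved region:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t bar_sz = 0x1000000;         /* a 16MB BAR, power of two */
        uint64_t reserved_end = 0xf0100000;  /* end of a conflicting RDM */

        /* Same expression as in the patch: round reserved_end up to
         * the next bar_sz boundary. Up to bar_sz - 1 bytes between the
         * regions can be wasted, which is the alignment issue above. */
        uint64_t base = (reserved_end + bar_sz - 1) &
                        ~(uint64_t)(bar_sz - 1);

        printf("relocated base: %#llx\n", (unsigned long long)base);
        /* prints 0xf1000000 */
        return 0;
    }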

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel