
Re: [Xen-devel] [PATCH RESEND 1/3] OvmfPkg/XenSupport: remove usage of prefetchable PCI host bridge aperture



On Fri, Mar 22, 2019 at 10:06:45AM +0100, Laszlo Ersek wrote:
> On 03/22/19 09:33, Roger Pau Monné wrote:
> > On Wed, Mar 06, 2019 at 12:40:54PM +0000, Igor Druzhinin wrote:
> >> This aperture doesn't exist in OVMF and trying to use it causes
> >> assertion failures later in cases where there are prefetchable and
> >> non-prefetchable BARs following each other. This configuration is
> >> quite likely with some PCI passthrough devices.
> > 
> > According to the PCIe spec, it's fine to place prefetchable BARs in
> > non-prefetchable memory space. There's a note that says that most
> > implementations will only have 1G of non-prefetchable memory, and
> > that most scalable platforms will map 32-bit BARs into
> > non-prefetchable memory regardless of the prefetchable bit value.
> > 
> > Shouldn't OVMF be fine with finding both prefetchable and
> > non-prefetchable BARs, as long as the memory region is set to
> > non-prefetchable?
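
As an aside, the prefetchable attribute is just bit 3 of a raw memory
BAR, so an allocator can always read it and still decide to fall back
to non-prefetchable space. A minimal decode sketch (EDK2-style, using
the BIT0/BIT3 macros from Base.h):

  UINT32   Bar;  // raw BAR dword, e.g. read with PciRead32()
  BOOLEAN  IsMemBar, Prefetchable, Is64Bit;

  IsMemBar     = (Bar & BIT0) == 0;                // bit 0 clear: memory BAR
  Prefetchable = IsMemBar && ((Bar & BIT3) != 0);  // bit 3: prefetchable
  Is64Bit      = IsMemBar &&
                 (((Bar >> 1) & 0x3) == 0x2);      // bits 2:1 == 10b: 64-bit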
> > 
> > Does OVMF have the capability to position BARs by itself? If so,
> > we could skip the placement done by hvmloader and just let OVMF
> > position things where it sees fit.
> 
> The core PciBusDxe driver that is built into OVMF certainly does the
> resource allocation/placement, but when OVMF is executed on Xen, this
> functionality of PciBusDxe is dynamically disabled by setting
> PcdPciDisableBusEnumeration to TRUE. (I'm not saying this is right vs.
> wrong, just that it happens.)
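
For readers following along, this is roughly how that knob gets
flipped in OvmfPkg/PlatformPei when Xen is detected (a sketch of the
idea, not the exact upstream code):

  RETURN_STATUS PcdStatus;

  //
  // Tell PciBusDxe not to (re)enumerate buses or assign resources;
  // the pre-existing (hvmloader) configuration is kept as-is.
  //
  PcdStatus = PcdSetBoolS (PcdPciDisableBusEnumeration, TRUE);
  ASSERT_RETURN_ERROR (PcdStatus);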
> 
> Note that OVMF itself checks PcdPciDisableBusEnumeration for many things
> (just grep OvmfPkg to see), so if we were to flip the PCD while running
> on Xen, it would change the behavior of OVMF on Xen in a number of
> areas. Can't offer a deeper treatise for now; all the related source
> code locations would have to be audited (likely with "git blame" too).
> 
> Now, if PciBusDxe *is* allowed/requested to lay out the BARs, through
> the PCD, then it (indirectly) depends on platform code to provide the
> resource apertures -- of the root bridges -- out of which it can
> allocate the BARs. My understanding is that XenSupport.c tries to detect
> these apertures "retroactively", from the pre-existing BAR placements.
> This was contributed by Ray in commit 49effaf26ec9
> ("OvmfPkg/PciHostBridgeLib: Scan for root bridges when running over
> Xen", 2016-05-11), so I'll have to defer to him on the code.
> 
> I believe that, if we flipped the PCD to FALSE on Xen, and hvmloader
> would stop pre-configuring the BARs (or OVMF would simply ignore that
> pre-config), then this code (XenSupport.c) should be possible to
> eliminate -- *however*, in that case, some other Xen-specific code would
> become necessary, to expose the root bridge resource apertures (out of
> which BARs should be allocated by PciBusDxe, see above).
> 
> In QEMU's case: all root bridges share the same apertures between each
> other (given any specific resource type). They are communicated via
> dynamic PCDs. The 32-bit MMIO aperture PCDs are set in PlatformPei
> somewhat simply (based on QEMU machine type, IIRC). The 64-bit MMIO
> aperture PCDs are also calculated in PlatformPei, but that calculation
> is a *lot* more complex.
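
Concretely, something along these lines (a sketch only: the real logic
lives in OvmfPkg/PlatformPei, the PCD names are the dynamic ones from
OvmfPkg.dec, and the values below are made-up examples):

  RETURN_STATUS  PcdStatus;
  UINT64         PciBase = 0x80000000;            // example 32-bit MMIO window start
  UINT64         PciSize = 0xFC000000 - PciBase;  // example: up to the fixed MMIO area

  PcdStatus = PcdSet64S (PcdPciMmio32Base, PciBase);
  ASSERT_RETURN_ERROR (PcdStatus);
  PcdStatus = PcdSet64S (PcdPciMmio32Size, PciSize);
  ASSERT_RETURN_ERROR (PcdStatus);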
> 
> All in all, the "root" information is the set of apertures, i.e. what
> parts of the GPA space can be used for what resource allocation.

Thanks for the detailed explanation. IMO it would be better to let
OVMF do the BAR placement instead of having to do it in hvmloader:
doing it there just causes code duplication between projects, and
there's nothing Xen-specific about PCI resource allocation.

I will try to find some time to look into this, although I'm not
going to be able to work on it immediately. I'd be more than happy if
someone else with spare time wants to pick this up.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

