
Re: [Xen-devel] RFC: [PATCH 1/3] Enhance platform support for PCI



On Mon, 2015-02-23 at 15:27 +0000, Jan Beulich wrote:
> >>> On 23.02.15 at 16:02, <ian.campbell@xxxxxxxxxx> wrote:
> > On Mon, 2015-02-23 at 14:45 +0000, Jan Beulich wrote:
> >> In which case the Dom0 OS doing so would need to communicate
> >> its decisions to the hypervisor, as you suggest further down.
> > 
> > So more concretely something like:
> >         #define PHYSDEVOP_pci_host_bridge_add <XX>
> >         struct physdev_pci_host_bridge_add {
> >             /* IN */
> >             uint16_t seg;
> >             uint8_t bus;
> >             uint64_t address;
> >         };
> >         typedef struct physdev_pci_host_bridge_add physdev_pci_host_bridge_add_t;
> >         DEFINE_XEN_GUEST_HANDLE(physdev_pci_host_bridge_add_t);
> > 
> > Where seg+bus are enumerated/assigned by dom0 and address is some unique
> > property of the host bridge -- most likely its pci cfg space base
> > address (which is what physdev_pci_mmcfg_reserved also takes, I think?)
> 
> Right.
> 
> > Do you think we would need start_bus + end_bus here? Xen could enumerate
> > this itself I think, and perhaps should even if dom0 tells us something?
> 
> That depends - if what you get presented here by Dom0 is a PCI
> device at <seg>:<bus>:00.0, and if all other setup was already
> done on it, then you could read the secondary and subordinate
> bus numbers from its config space. If that's not possible, then
> Dom0 handing you these values would seem to be necessary.
> 
> As a result you may also need a hook from PCI device registration,
> allowing to associate it with the right host bridge (and refusing to
> add any for which there's none).

Right.
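
For concreteness, I'd expect that read to look something like the
below (a sketch only; I'm assuming the standard type 1 bridge header
offsets from pci_regs.h and Xen's pci_conf_read8 helper):

        /* Read the bus range from the bridge at <seg>:<bus>:00.0,
         * assuming dom0 has already completed its setup. */
        uint8_t sec = pci_conf_read8(seg, bus, 0, 0, PCI_SECONDARY_BUS);
        uint8_t sub = pci_conf_read8(seg, bus, 0, 0, PCI_SUBORDINATE_BUS);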

My thinking was that PHYSDEVOP_pci_host_bridge_add would add an entry
to some mapping data structure from (segment, bus) to a handle for the
corresponding PCI host bridge driver in Xen.
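
Roughly this shape, perhaps (a hypothetical sketch -- all of the names
below are made up for illustration, nothing like this exists in the
tree today):

        /* Hypothetical per-bridge entry registered by the hypercall. */
        struct pci_host_bridge {
            struct list_head node;    /* on a global host_bridges list */
            uint16_t seg;
            uint8_t bus_start, bus_end;
            uint64_t address;         /* CFG space base address */
            const struct pci_host_bridge_ops *ops; /* driver handle */
        };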

PHYSDEVOP_manage_pci_add would then have to look up the host bridge
driver from the (segment, bus), I think, to construct the necessary
linkage for use later when we try to do things to the device, and it
should indeed fail if it can't find one.
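
Something like this, presumably (again just a sketch against the
hypothetical structure above):

        /* Hypothetical lookup used from PHYSDEVOP_manage_pci_add. */
        static struct pci_host_bridge *find_host_bridge(uint16_t seg,
                                                        uint8_t bus)
        {
            struct pci_host_bridge *b;

            list_for_each_entry ( b, &host_bridges, node )
                if ( b->seg == seg &&
                     b->bus_start <= bus && bus <= b->bus_end )
                    return b;

            return NULL; /* caller fails the device add in this case */
        }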

> As an alternative, extending PHYSDEVOP_manage_pci_add_ext in
> a suitable manner may be worth considering, provided (like on x86
> and ia64) the host bridges get surfaced as distinct PCI devices.
> 
> >> This
> >> basically replaces the bus scan (on segment 0) that Xen does on
> >> x86 (which topology information gets derived from).
> > 
> > Is the reason that the scan covers only segment 0 that it's the one
> > which lives at the legacy PCI CFG addresses (or those magic I/O ports)?
> 
> Right - ideally we would scan all segments, but we need Dom0 to
> tell us which MMCFG regions are safe to access,

Is this done via PHYSDEVOP_pci_mmcfg_reserved?
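
(For reference, quoting from memory of xen/include/public/physdev.h:

        #define PHYSDEVOP_pci_mmcfg_reserved    48
        struct physdev_pci_mmcfg_reserved {
            uint64_t address;
            uint16_t segment;
            uint8_t start_bus;
            uint8_t end_bus;
            uint32_t flags;
        };

so it already carries a bus range alongside the MMCFG base address.)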

>  and hence can't
> do that scan at boot time. But we also won't get away without
> scanning, as we need to set up the IOMMU(s) to at least cover
> the devices used for booting the system.

Which hopefully are all on segment 0, or aren't needed until after dom0
tells Xen about them, I suppose.

> > What about other host bridges in segment 0 which aren't at that address?
> 
> At which address?

I meant this to be a back reference to "the legacy PCI CFG addresses (or
those magic I/O ports)".

>  (All devices on segment zero are supposed to
> be accessible via config space access method 1.)

Is that "the legacy ....  or magic ..." again?
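
(Where by "magic I/O ports" I mean the 0xCF8/0xCFC pair, i.e. method 1
being roughly:

        /* Type 1 config access: address via 0xCF8, data via 0xCFC. */
        outl(0x80000000 | (bus << 16) | (dev << 11) | (func << 8) |
             (reg & ~3), 0xcf8);
        val = inl(0xcfc);

just to make sure we mean the same thing.)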

> > You could do the others based on MMCFG tables if you wanted, right?
> 
> Yes, with the above mentioned caveat.
> 
> Jan


