Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge discovery within XEN on ARM.



On Mon, 27 Jul 2020, Roger Pau Monné wrote:
> On Sat, Jul 25, 2020 at 10:59:50AM +0100, Julien Grall wrote:
> > On Sat, 25 Jul 2020 at 00:46, Stefano Stabellini <sstabellini@xxxxxxxxxx> 
> > wrote:
> > >
> > > On Fri, 24 Jul 2020, Julien Grall wrote:
> > > > On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini 
> > > > <sstabellini@xxxxxxxxxx> wrote:
> > > > > > If they are not equal, then I fail to see why it would be useful to 
> > > > > > have this
> > > > > > value in Xen.
> > > > >
> > > > > I think that's because the domain is actually more convenient to use
> > > > > because a segment can span multiple PCI host bridges. So my
> > > > > understanding is that a segment alone is not sufficient to identify a
> > > > > host bridge. From a software implementation point of view it would be
> > > > > better to use domains.
> > > >
> > > > AFAICT, this would be a matter of one check vs two checks in Xen :).
> > > > But... looking at Linux, they will also use domain == segment for ACPI
> > > > (see [1]). So, I think, they still have to use (domain, bus) to do the 
> > > > lookup.
> 
> You have to use the (segment, bus) tuple when doing a lookup because
> MMCFG regions on ACPI are defined for a segment and a bus range, you
> can have an MMCFG region that covers segment 0 bus [0, 20) and another
> MMCFG region that covers segment 0 bus [20, 255], and those will use
> different addresses in the MMIO space.

Thanks for the clarification!
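
To make sure I understand it correctly: the lookup would then be keyed on
the (segment, bus) tuple, roughly like the sketch below. This is purely
illustrative, not actual Xen code; the structure and function names are
made up, and the two example windows just mirror the segment 0 split you
describe above.

/*
 * Purely illustrative sketch, not actual Xen code.  ECAM/MMCFG windows
 * are keyed by segment *and* bus range, so a config space lookup needs
 * the (segment, bus) tuple; the segment alone cannot pick the window.
 */
#include <stddef.h>
#include <stdint.h>

struct mmcfg_region {
    uint16_t segment;
    uint8_t  start_bus;
    uint8_t  end_bus;
    uint64_t base;        /* physical base of the ECAM window (example) */
};

/* Example: two windows for segment 0, as in the split described above. */
static const struct mmcfg_region regions[] = {
    { .segment = 0, .start_bus = 0,  .end_bus = 19,  .base = 0xe0000000UL },
    { .segment = 0, .start_bus = 20, .end_bus = 255, .base = 0xf0000000UL },
};

static const struct mmcfg_region *mmcfg_find(uint16_t segment, uint8_t bus)
{
    size_t i;

    for ( i = 0; i < sizeof(regions) / sizeof(regions[0]); i++ )
        if ( regions[i].segment == segment &&
             bus >= regions[i].start_bus && bus <= regions[i].end_bus )
            return &regions[i];

    return NULL;          /* no ECAM window covers this (segment, bus) */
}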


> > > > > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > > > > Dom0 and Xen can synchronize on the segment number.
> > > > >
> > > > > I was hoping we could write down the assumption somewhere that for the
> > > > > cases we care about domain == segment, and error out if it is not the
> > > > > case.
> > > >
> > > > Given that we have only the domain in hand, how would you enforce that?
> > > >
> > > > From this discussion, it also looks like there is a mismatch between the
> > > > implementation and the understanding on QEMU devel. So I am a bit
> > > > concerned that this is not stable and may change in future Linux 
> > > > versions.
> > > >
> > > > IOW, we are now tying Xen to Linux. So could we implement
> > > > PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> > > > really represent the segment?
> > >
> > > I don't think we are tying Xen to Linux. Rob has already said that
> > > linux,pci-domain is basically a generic device tree property.
> > 
> > My concern is not so much the name of the property, but the definition of 
> > it.
> > 
> > AFAICT, from this thread there can be two interpretation:
> >       - domain == segment
> >       - domain == (segment, bus)
> 
> I think domain is just an alias for segment; the difference seems to
> be that when using DT all bridges get a different segment (or domain)
> number, and thus you will always end up starting numbering at bus 0
> for each bridge?
>
> Ideally you would need a way to specify the segment and start/end bus
> numbers of each bridge; otherwise you cannot match what ACPI does. It
> might still be fine as long as the OS and Xen agree on the segments and
> bus numbers that belong to each bridge (and thus each ECAM region).

That is what I thought, and it is why I was asking to clarify the naming
and/or to write down a document explaining the assumptions, if any.

Then, after Julien's email, I went through the Linux codebase, and
clearly there is a different assumption baked into the Linux kernel for
architectures that have CONFIG_PCI_DOMAINS enabled (including ARM64).

The assumption is that segment == domain == unique host bridge. It looks
like it comes from IEEE Std 1275-1994, but I am not certain. In fact, it
seems that ACPI MCFG and IEEE Std 1275-1994 don't exactly match. So I am
starting to think that domain == segment holds for IEEE Std 1275-1994
compliant, device tree based systems.
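
If that interpretation holds, the Xen side could boil down to something
like the sketch below: take "linux,pci-domain" as the segment, take
"bus-range" (defaulting to 0-255) as the bus window of that bridge's ECAM
region, and fail loudly when the assumption cannot work. Again, this is
purely illustrative: the pci_host_bridge layout and the dt_read_u32() /
find_bridge_by_segment() helpers are hypothetical placeholders, not
existing Xen or Linux interfaces.

/*
 * Illustrative sketch only, not actual Xen code.  The helpers and the
 * pci_host_bridge layout are hypothetical; the point is to write the
 * "domain == segment" assumption down in code and to error out when it
 * cannot hold (e.g. two bridges claiming the same segment).
 */
#include <errno.h>
#include <stdint.h>

struct pci_host_bridge {
    uint16_t segment;    /* taken from "linux,pci-domain" */
    uint8_t  bus_start;  /* taken from "bus-range", default 0 */
    uint8_t  bus_end;    /* taken from "bus-range", default 255 */
};

/* Hypothetical: read the idx-th cell of a DT property, 0 on success. */
int dt_read_u32(const void *node, const char *name, unsigned int idx,
                uint32_t *val);
/* Hypothetical: find an already registered bridge for a given segment. */
struct pci_host_bridge *find_bridge_by_segment(uint16_t segment);

static int bridge_set_segment(struct pci_host_bridge *bridge,
                              const void *node)
{
    uint32_t domain, bus;

    if ( dt_read_u32(node, "linux,pci-domain", 0, &domain) )
        return -ENODEV;        /* no property: cannot derive the segment */

    if ( domain > 0xffff )
        return -EINVAL;        /* PCI segments are 16-bit */

    if ( find_bridge_by_segment(domain) )
        return -EEXIST;        /* domain == segment assumption broke */

    bridge->segment = domain;  /* assumption: domain == segment */

    bridge->bus_start = 0;
    bridge->bus_end = 255;
    if ( !dt_read_u32(node, "bus-range", 0, &bus) )
        bridge->bus_start = bus;
    if ( !dt_read_u32(node, "bus-range", 1, &bus) )
        bridge->bus_end = bus;

    return 0;
}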

 

