
Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge discovery within XEN on ARM.





On 24/07/2020 16:29, Roger Pau Monné wrote:
On Fri, Jul 24, 2020 at 04:15:47PM +0100, Julien Grall wrote:


On 24/07/2020 15:44, Roger Pau Monné wrote:
diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
new file mode 100644
index 0000000000..358508b787
--- /dev/null
+++ b/xen/arch/arm/pci/Makefile
@@ -0,0 +1,4 @@
+obj-y += pci.o
+obj-y += pci-host-generic.o
+obj-y += pci-host-common.o
+obj-y += pci-access.o

The Kconfig option mentions the support being explicitly for ARM64,
would it be better to place the code in arch/arm/arm64 then?
I don't believe any of the code in this series is very arm64-specific. I
guess it has only been tested on arm64. So I would rather keep that under
arm/pci.

Ack. Could the Kconfig be adjusted to not depend on ARM_64? Just
stating it's only been tested on Arm64 would be enough IMO.

We already have an option to select PCI (see CONFIG_HAS_PCI). So I would prefer if we reuse it (possibly renamed to CONFIG_PCI) rather than inventing a new, Arm-specific one.

Regarding the dependency, it will depend on whether it is possible to make it build easily on Arm32. If not, then we will need to keep the ARM_64 dependency.


+
+    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
+
+    if ( unlikely(!bridge) )
+    {
+        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
+                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);

I had a patch to add a custom modifier to our printf format in
order to handle pci_sbdf_t natively:

https://patchew.org/Xen/20190822065132.48200-1-roger.pau@xxxxxxxxxx/

It missed maintainers Acks and was never committed. Since you are
doing a bunch of work here, and likely adding a lot of SBDF related
prints, feel free to import the modifier (%pp) and use in your code
(do not attempt to switch existing users, or it's likely to get
stuck again).
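
For illustration, a minimal sketch of how the print quoted above could look once that modifier is imported (assuming, as in the linked patch, that %pp consumes a pointer to a pci_sbdf_t):

    /* Hypothetical use of the %pp modifier from the linked patch. */
    printk(XENLOG_ERR "Unable to find bridge for %pp\n", &sbdf);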

I forgot about this patch :/. It would be good to revive it. Which acks are
you missing?

I only had an Ack from Jan, so I was missing Intel and AMD Acks, which
would now only be Intel since AMD has been absorbed by the x86
maintainers.

Ok. So, it should be easier to get it acked now :).


[...]

+static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
+        struct pci_config_window *cfg)
+{
+    const __be32 *cells;

It's my impression that while based on Linux this is not a verbatim
copy of a Linux file, and tries to adhere to the Xen coding style.
If so, please use uint32_t here.

uint32_t would be incorrect because this is a 32-bit value that is always in
big endian. I don't think we have another typedef to show it is a 32-bit BE
value, so __be32 is the best choice.

Oh, OK, so this is done to explicitly denote the endianness of a value
on the type itself.

That's correct. On Linux, they use sparse to check that BE and LE fields are not mixed together. We don't have that in Xen, but at least this makes it more obvious what we are using.
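
As a minimal sketch of that pattern (hypothetical handling of the "bus-range" property; dt_get_property() and dt_read_number() are the existing Xen device tree helpers assumed here), the cells stay typed as __be32 until the single point of conversion:

    const __be32 *cells;
    u32 len, bus_start, bus_end;

    cells = dt_get_property(dev, "bus-range", &len);
    if ( !cells || len < 2 * sizeof(*cells) )
        return false;

    /* Conversion from big endian to host endianness happens only here. */
    bus_start = dt_read_number(cells, 1);
    bus_end   = dt_read_number(cells + 1, 1);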


[...]

+
+    if ( acpi_disabled )
+        dt_pci_init();
+    else
+        acpi_pci_init();

Isn't there an enum or something that tells you whether the system
description is coming from ACPI or from DT?

This if .. else seems fragile.


This is the common way we do it on Arm.... I would welcome any improvement,
but I don't think this should be part of this work.

Ack. In any case I think that for ACPI, PCI init will get called by
acpi_mmcfg_init as part of acpi_boot_init, so I'm not sure there's
much point in adding anything about ACPI here, as it seems this
will be DT only?

acpi_boot_init() does not exist on Arm. Looking at x86, acpi_mmcfg_init() is not even called from that function.

In general, I would prefer if each subsystem takes care of its own initialization. This makes it easier to figure out the differences between ACPI and DT. FWIW, this is in line with the majority of the Arm code.
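
For illustration, a minimal sketch of that approach (hypothetical wiring; acpi_disabled and __initcall are the only existing Xen constructs assumed here), where the PCI code picks DT or ACPI itself rather than being driven from a central dispatcher:

    /* Hypothetical: the PCI subsystem owns its own ACPI vs DT decision. */
    static int __init pci_subsystem_init(void)
    {
        if ( acpi_disabled )
            dt_pci_init();
        else
            acpi_pci_init();

        return 0;
    }
    __initcall(pci_subsystem_init);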

Cheers,

--
Julien Grall



 

