Re: [PATCH v1 06/14] xen/arm: Add support for PCI ecam operations
Hi Stefano,

> On 15 Sep 2021, at 12:06 am, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
>
> On Tue, 14 Sep 2021, Rahul Singh wrote:
>>>> +        return NULL;
>>>> +
>>>> +    busn -= cfg->busn_start;
>>>> +    base = cfg->win + (busn << cfg->ops->bus_shift);
>>>> +
>>>> +    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
>>>> +}
>>>
>>> I understand that the arm32 part is not implemented and not part of this
>>> series, that's fine. However, if the plan is that arm32 will dynamically
>>> map each bus individually, then I imagine this function will have an
>>> ioremap in the arm32 version. Which means that we also need an
>>> unmap_bus call in struct pci_ops. I understand that pci_ecam_unmap_bus
>>> would be a NOP today for arm64, but I think it makes sense to have it if
>>> we want the API to be generic.
>>
>> As per my understanding, we don't need pci_ecam_unmap_bus(..), as I don't
>> see any use case for unmapping the bus dynamically. We can add support
>> for per_bus_mapping for ARM32 once we implement the arm32 part.
>> Let me know your view on this.
>
> In the patch titled "xen/arm: PCI host bridge discovery within XEN on
> ARM" there is the following in-code comment:
>
>  * On 64-bit systems, we do a single ioremap for the whole config space
>  * since we have enough virtual address range available. On 32-bit, we
>  * ioremap the config space for each bus individually.
>  *
>  * As of now only 64-bit is supported, 32-bit is not supported.
>
> So I take it that on arm32 we don't have enough virtual address range
> available, therefore we cannot ioremap the whole range. Instead, we'll
> have to ioremap the config space of each bus individually.

Yes, you are right; my understanding is also the same.

> I assumed that the idea was to call ioremap and iounmap dynamically,
> otherwise the total amount of virtual address range required would still
> be the same.

As per my understanding, for 32-bit we need per-bus mapping because we
don't have enough virtual address space in one contiguous chunk, but we
can find virtual address space in separate chunks. I am not sure we need
to map/unmap the virtual address space for each read/write call. I just
checked the Linux code [1], and there too the mapping is done once, not
for each read/write call.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/pci/ecam.c?id=ea4aae05974334e9837d86ff1cb716bad36b3ca8#n76

> I also thought that the dynamic mapping function, the one
> which would end up calling ioremap on arm32, would be pci_ecam_map_bus.
> If so, then we are missing the corresponding unmapping function, the one
> that would call iounmap on arm32 and do nothing on arm64, called before
> returning from pci_generic_config_read and pci_generic_config_write.
>
> As always, I am not asking for the arm32 implementation, but if we are
> introducing internal APIs, it would be good that they are consistent.
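For illustration only, here is a minimal sketch of the pairing suggested
above: a hypothetical unmap_bus hook in struct pci_ops, left NULL on
arm64 (which keeps its single static ioremap) and implemented with
iounmap on arm32. This is not code from the series; the hook name and
the read path below are assumptions.

    struct pci_ops {
        void __iomem *(*map_bus)(struct pci_host_bridge *bridge,
                                 uint32_t sbdf, uint32_t offset);
        /* Hypothetical hook: undo a dynamic per-bus mapping (arm32 only). */
        void (*unmap_bus)(struct pci_host_bridge *bridge, void __iomem *addr);
        int (*read)(struct pci_host_bridge *bridge, uint32_t sbdf,
                    uint32_t reg, uint32_t len, uint32_t *value);
        int (*write)(struct pci_host_bridge *bridge, uint32_t sbdf,
                     uint32_t reg, uint32_t len, uint32_t value);
    };

    int pci_generic_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
                                uint32_t reg, uint32_t len, uint32_t *value)
    {
        int rc = 0;
        void __iomem *addr = bridge->ops->map_bus(bridge, sbdf, reg);

        if ( !addr )
            return -ENODEV;

        switch ( len )
        {
        case 1:
            *value = readb(addr);
            break;
        case 2:
            *value = readw(addr);
            break;
        case 4:
            *value = readl(addr);
            break;
        default:
            rc = -EINVAL;
            break;
        }

        /* NOP on arm64; iounmap of the per-bus window on arm32. */
        if ( bridge->ops->unmap_bus )
            bridge->ops->unmap_bus(bridge, addr);

        return rc;
    }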
>>>> @@ -50,10 +51,41 @@ struct pci_host_bridge {
>>>>     u8 bus_start;                  /* Bus start of this bridge. */
>>>>     u8 bus_end;                    /* Bus end of this bridge. */
>>>>     void *sysdata;                 /* Pointer to the config space window*/
>>>> +   const struct pci_ops *ops;
>>>> };
>>>>
>>>> +struct pci_ops {
>>>> +    void __iomem *(*map_bus)(struct pci_host_bridge *bridge, uint32_t sbdf,
>>>> +                             uint32_t offset);
>>>> +    int (*read)(struct pci_host_bridge *bridge, uint32_t sbdf,
>>>> +                uint32_t reg, uint32_t len, uint32_t *value);
>>>> +    int (*write)(struct pci_host_bridge *bridge, uint32_t sbdf,
>>>> +                 uint32_t reg, uint32_t len, uint32_t value);
>>>> +};
>>>> +
>>>> +/*
>>>> + * struct to hold pci ops and bus shift of the config window
>>>> + * for a PCI controller.
>>>> + */
>>>> +struct pci_ecam_ops {
>>>> +    unsigned int bus_shift;
>>>> +    struct pci_ops pci_ops;
>>>> +    int (*init)(struct pci_config_window *);
>>>
>>> Is this last member of the struct needed/used?
>>
>> Yes, it will be used by some vendor-specific boards (N1SDP). Please check
>> below:
>> https://git.linaro.org/landing-teams/working/arm/n1sdp-pcie-quirk.git/commit/?id=04b7e76d0fe6481a803f58e54e008a1489d713a5
>
> OK. I don't doubt that there might be bridge-specific initializations,
> but we already have things like:
>
> DT_DEVICE_START(pci_gen, "PCI HOST ZYNQMP", DEVICE_PCI)
> .dt_match = gen_pci_dt_match,
> .init = gen_pci_dt_init,
> DT_DEVICE_END
>
> The question is: do we actually need two different vendor-specific init
> functions? One is driven by device tree, e.g. ZynqMP is calling
> gen_pci_dt_init. The other one is pci_ecam_ops->init, which is called
> from the following chain:
>
> DT_DEVICE_START -> gen_pci_dt_init -> pci_host_common_probe ->
> gen_pci_init -> pci_generic_ecam_ops.init
>
> What's the difference between gen_pci_dt_init and pci_ecam_ops.init in
> terms of purpose?

Yes, I also agree with you that we don't need two init functions. I will
modify the code as below (a sketch of this simplified flow follows at the
end of this mail):

DT_DEVICE_START -> pci_host_common_probe -> gen_pci_init -> pci_generic_ecam_ops.init

Regards,
Rahul

> I am happy to have pci_ecam_ops.init if it serves a different purpose
> from DT_DEVICE_START.init. In that case we might want to add an in-code
> comment so that an engineer would know where to add vendor-specific code
> (whether to DT_DEVICE_START.init or to pci_ecam_ops.init).
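For illustration, a sketch of the simplified flow described above, where
the device tree registration hands over straight to the common probe path
and pci_ecam_ops.init remains the single vendor-specific hook. The probe
signature and the glue function here are assumptions, not code from the
series.

    static const struct dt_device_match gen_pci_dt_match[] __initconst =
    {
        DT_MATCH_COMPATIBLE("pci-host-ecam-generic"),
        { /* sentinel */ },
    };

    static int __init gen_pci_dt_init(struct dt_device_node *dev,
                                      const void *data)
    {
        /*
         * No DT-specific initialisation left here: go straight to the
         * common probe, which ends up calling pci_generic_ecam_ops.init.
         * The probe signature is assumed for the sake of the sketch.
         */
        return pci_host_common_probe(dev, &pci_generic_ecam_ops);
    }

    DT_DEVICE_START(pci_gen, "PCI HOST GENERIC", DEVICE_PCI)
    .dt_match = gen_pci_dt_match,
    .init = gen_pci_dt_init,
    DT_DEVICE_END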