[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v3 11/17] xen/arm: PCI host bridge discovery within XEN on ARM

  • To: Rahul Singh <rahul.singh@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 29 Sep 2021 10:31:05 +0200
  • Cc: bertrand.marquis@xxxxxxx, Andre.Przywara@xxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 29 Sep 2021 08:31:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 28.09.2021 20:18, Rahul Singh wrote:
> Xen during boot will read the PCI device tree node "reg" property
> and map the PCI config space into Xen memory.
>
> As of now only the "pci-host-ecam-generic" compatible host bridge is
> supported.
>
> The "linux,pci-domain" device tree property assigns a fixed PCI domain
> number to a host bridge; otherwise an unstable (across boots) unique
> number will be assigned by Linux. Xen accesses PCI devices by
> Segment:Bus:Device:Function, and a segment number in Xen is the same
> as a domain number in Linux, so the two have to be in sync to access
> the correct PCI devices.
>
> Xen will read the "linux,pci-domain" property from the device tree
> node and configure the host bridge segment number accordingly. If this
> property is not available, Xen will allocate a unique segment number
> for the host bridge.
>
> Signed-off-by: Rahul Singh <rahul.singh@xxxxxxx>
> ---
> Change in v3:
> - Modify commit msg based on received comments.
> - Remove added struct match_table{} struct in struct device{}
> - Replace uint32_t sbdf to pci_sbdf_t sbdf to avoid typecast
> - Remove bus_start,bus_end and void *sysdata from struct pci_host_bridge{}
> - Move "#include <asm/pci.h>" in "xen/pci.h" after pci_sbdf_t sbdf declaration

This part, if not split into a separate patch in the first place, wants
mentioning in at least half a sentence of the description. Then ...

> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -15,7 +15,6 @@
>  #include <xen/pfn.h>
>  #include <asm/device.h>
>  #include <asm/numa.h>
> -#include <asm/pci.h>
> 
>  /*
>   * The PCI interface treats multi-function devices as independent
> @@ -62,6 +61,8 @@ typedef union {
>      };
>  } pci_sbdf_t;
> 
> +#include <asm/pci.h>
> +
>  struct pci_dev_info {
>      /*
>       * VF's 'is_extfn' field is used to indicate whether its PF is an extended

... this part
Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
(also if you decide to move this to a separate patch)
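For readers following the include-order change in the hunk above: the point of moving "#include <asm/pci.h>" below the pci_sbdf_t definition is that the Arm asm/pci.h now consumes pci_sbdf_t by value, and a union used by value must be fully defined first. A minimal stand-alone sketch of the pattern (type and field names borrowed from the patch, layout simplified to segment/bus/devfn; bitfield order as laid out by GCC/Clang on little-endian targets):

```c
#include <stdint.h>

/* Simplified stand-in for the pci_sbdf_t union from xen/pci.h. */
typedef union {
    uint32_t sbdf;
    struct {
        uint32_t devfn : 8;   /* device/function, low bits */
        uint32_t bus   : 8;
        uint32_t seg   : 16;  /* segment == Linux PCI domain number */
    };
} pci_sbdf_t;

/*
 * Anything included from this point on -- the role asm/pci.h plays
 * after the move -- can take pci_sbdf_t by value; a mere forward
 * declaration of the union would not allow that.
 */
static inline uint32_t sbdf_segment(pci_sbdf_t sbdf)
{
    return sbdf.seg;
}
```

With this ordering, an arch header can declare helpers such as sbdf_segment() without any forward declaration, which is why moving the include is preferable to declaring the union twice.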



