
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1



Please give us a chance to respond; it's been only just over a day and
we are all busy with lots of different things.

On Tue, 2015-06-09 at 14:42 +0000, Jaggi, Manish wrote:
> Hi Ian/Stefano,
> As discussed in the call, I have sent the design.
> I didn't get any feedback on it.
> 
> Regards,
> Manish Jaggi
> 
> ________________________________________
> From: xen-devel-bounces@xxxxxxxxxxxxx <xen-devel-bounces@xxxxxxxxxxxxx> on 
> behalf of Manish Jaggi <mjaggi@xxxxxxxxxxxxxxxxxx>
> Sent: Monday, June 8, 2015 12:52:55 AM
> To: xen-devel@xxxxxxxxxxxxx; Ian Campbell; Stefano Stabellini; Vijay Kilari; 
> Kulkarni, Ganapatrao; Kumar, Vijaya; Kapoor, Prasun
> Subject: [Xen-devel] PCI Passthrough ARM Design : Draft1
> 
> PCI Pass-through in Xen ARM
> --------------------------
> 
> Index
> 1. Background of PCI passthrough
> 2. Basic PCI support in Xen ARM
> 2.1 pci_hostbridge and pci_hostbridge_ops
> 2.2 PHYSDEVOP_pci_host_bridge_add hypercall
> 3. Dom0 access to PCI devices
> 4. DomU assignment of a PCI device
> 5. NUMA domU and vITS
> 6. DomU boot-up and device-attach flow
> 
> 1. Background of PCI passthrough
> --------------------------------
> Passthrough refers to assigning a PCI device to a guest domain (domU)
> such that the guest has full control over the device. The MMIO space
> and the interrupts are managed by the guest itself, close to how a
> bare-metal kernel manages a device.
> 
> The device's access to the guest address space needs to be isolated
> and protected. The SMMU (System MMU, the IOMMU on ARM) is programmed
> by the Xen hypervisor to allow the device to access guest memory for
> data transfers and to deliver MSI/MSI-X interrupts. For MSI/MSI-X the
> device writes to the interrupt translation register (GITS_TRANSLATER)
> in the ITS address space.
> 
> 2. Basic PCI support in Xen ARM
> -------------------------------
> The APIs to read and write the PCI configuration space are based on
> segment:bdf (sbdf). How an sbdf maps to a physical configuration-space
> address is the responsibility of the PCI host controller.
>
> ARM PCI support in Xen introduces PCI host controller drivers, similar
> to what exists in Linux. Each driver registers callbacks, which are
> invoked when the driver matches the compatible property of a PCI host
> controller device tree node.
> 
> 2.1 pci_hostbridge and pci_hostbridge_ops
>
> The init function of a PCI host driver registers the host bridge
> callbacks with:
> int pci_hostbridge_register(pci_hostbridge_t *pcihb);
> 
> struct pci_hostbridge_ops {
>     /* Read/write the config space of <bus:devfn> behind this bridge. */
>     u32 (*pci_conf_read)(struct pci_hostbridge*, u32 bus, u32 devfn,
>                          u32 reg, u32 bytes);
>     void (*pci_conf_write)(struct pci_hostbridge*, u32 bus, u32 devfn,
>                            u32 reg, u32 bytes, u32 val);
> };
>
> struct pci_hostbridge {
>     u32 segno;                      /* segment number bound by dom0 */
>     paddr_t cfg_base;               /* physical base of config space */
>     paddr_t cfg_size;               /* size of config space */
>     struct dt_device_node *dt_node; /* host controller device tree node */
>     struct pci_hostbridge_ops ops;  /* config space accessors */
>     struct list_head list;          /* entry in pci_hostbridge_list */
> };
> 
> A PCI config read would then internally look like:
>
> u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn, u32 reg, u32 bytes)
> {
>     pci_hostbridge_t *pcihb;
>
>     /* Find the host bridge owning this segment and forward the read. */
>     list_for_each_entry(pcihb, &pci_hostbridge_list, list)
>     {
>         if ( pcihb->segno == seg )
>             return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
>     }
>
>     return ~0; /* unknown segment: all-ones, as for an absent device */
> }
> 
> 2.2 PHYSDEVOP_pci_host_bridge_add hypercall
>
> Xen accesses the PCI configuration space based on the sbdf received
> from the guest. The order in which the PCI host controller nodes
> appear in the device tree may not match the order in which dom0
> enumerates them and assigns segment numbers. Thus there needs to be a
> mechanism to bind the segment number assigned by dom0 to the
> corresponding PCI host controller. The following hypercall is
> introduced:
> 
> #define PHYSDEVOP_pci_host_bridge_add    44
> struct physdev_pci_host_bridge_add {
>     /* IN */
>     uint16_t seg;       /* segment number assigned by dom0 */
>     uint64_t cfg_base;  /* physical base of the bridge's config space */
>     uint64_t cfg_size;  /* size of the bridge's config space */
> };
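>
> For illustration, a rough sketch of how dom0 Linux might issue this
> hypercall when it discovers a host bridge (domain_nr and cfg_res are
> illustrative names for data the host controller driver already holds,
> not part of this design):
>
> /* Sketch only: issued by the dom0 kernel once per discovered bridge. */
> struct physdev_pci_host_bridge_add add = {
>     .seg      = domain_nr,              /* segment dom0 assigned */
>     .cfg_base = cfg_res->start,         /* config window base */
>     .cfg_size = resource_size(cfg_res), /* config window size */
> };
>
> if (HYPERVISOR_physdev_op(PHYSDEVOP_pci_host_bridge_add, &add))
>     pr_err("Failed to register PCI host bridge with Xen\n");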
> 
> This hypercall is invoked before dom0 invokes the PHYSDEVOP_pci_device_add
> hypercall. The handler updates the segment number of the matching
> pci_hostbridge by calling:
>
> int pci_hostbridge_setup(uint32_t segno, uint64_t cfg_base,
>                          uint64_t cfg_size);
> 
> Subsequent calls to pci_conf_read/write are completed by the
> pci_hostbridge_ops of the respective pci_hostbridge.
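>
> A minimal sketch of pci_hostbridge_setup, assuming the host bridge is
> identified by the config window passed in the hypercall:
>
> /* Sketch only: bind the dom0-assigned segment number to the host
>  * bridge whose config window matches the one passed by dom0. */
> int pci_hostbridge_setup(uint32_t segno, uint64_t cfg_base,
>                          uint64_t cfg_size)
> {
>     pci_hostbridge_t *pcihb;
>
>     list_for_each_entry(pcihb, &pci_hostbridge_list, list)
>     {
>         if ( pcihb->cfg_base == cfg_base && pcihb->cfg_size == cfg_size )
>         {
>             pcihb->segno = segno;
>             return 0;
>         }
>     }
>
>     return -ENODEV; /* no host bridge registered for this window */
> }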
> 
> 3. Dom0 access to PCI devices
> -----------------------------
> As per the design of the Xen hypervisor, dom0 enumerates the PCI
> devices. For each device the MMIO space has to be mapped in the
> stage-2 translation for dom0. For dom0, Xen maps the ranges found in
> the PCI host controller device tree nodes into the stage-2 translation.
>
> The 4K GITS_TRANSLATER page must also be mapped in the stage-2
> translation so that MSI/MSI-X delivery works. This is done during vITS
> initialisation for dom0/domU.
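>
> As an illustration, a sketch of mapping one host bridge window plus
> the GITS_TRANSLATER page for dom0. map_mmio_1to1() is a placeholder
> name for the actual stage-2 (p2m) mapping helper, not an existing
> interface:
>
> /* Sketch only: map one MMIO window from the host bridge "ranges"
>  * property and the 4K ITS translation register page into dom0. */
> static int dom0_map_pci_window(struct domain *d, paddr_t addr,
>                                paddr_t size, paddr_t gits_translater)
> {
>     int rc;
>
>     rc = map_mmio_1to1(d, addr, size);        /* placeholder helper */
>     if ( rc )
>         return rc;
>
>     /* Needed so assigned devices can deliver MSI/MSI-X. */
>     return map_mmio_1to1(d, gits_translater, PAGE_SIZE);
> }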
> 
> 4. DomU assignment of a PCI device
> ----------------------------------
> When a device is attached to a domU, provision has to be made so that
> the domU can access the MMIO space of the device and so that Xen can
> identify the mapping between the guest bdf and the system bdf. Two
> hypercalls are introduced:
>
> #define PHYSDEVOP_map_mmio              40
> #define PHYSDEVOP_unmap_mmio            41
> struct physdev_map_mmio {
>     /* IN */
>     uint64_t addr;   /* physical base of the MMIO region */
>     uint64_t size;   /* size of the MMIO region */
> };
>
> Xen adds the MMIO space to the stage-2 translation for the domU. The
> restriction is that Xen creates a 1:1 mapping of the MMIO address.
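>
> A sketch of the handler, reusing the map_mmio_1to1() placeholder from
> above for the actual stage-2 mapping routine:
>
> /* Sketch only: handle PHYSDEVOP_map_mmio for the target domU d; the
>  * guest sees the MMIO region at its machine address (1:1 mapping). */
> static int physdev_map_mmio(struct domain *d,
>                             const struct physdev_map_mmio *op)
> {
>     if ( (op->addr | op->size) & (PAGE_SIZE - 1) )
>         return -EINVAL; /* require page-aligned regions */
>
>     return map_mmio_1to1(d, op->addr, op->size);
> }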
> 
> #define PHYSDEVOP_map_sbdf              43
> struct physdev_map_sbdf {
>     int domain_id;   /* domU the device is assigned to */
>
>     /* physical (system) sbdf of the device */
>     int sbdf_s;
>     int sbdf_b;
>     int sbdf_d;
>     int sbdf_f;
>
>     /* guest sbdf under which the device appears in the domU */
>     int gsbdf_s;
>     int gsbdf_b;
>     int gsbdf_d;
>     int gsbdf_f;
> };
> 
> Each domain has a pdev list, which contains all the PCI devices
> assigned to it. The pdev structure already carries the sbdf
> information. The arch_pci_dev structure is extended to also carry the
> gsbdf information (gs = guest segment id).
>
> Whenever there is a trap from the guest or an interrupt has to be
> injected, the pdev list is iterated to find the gsbdf.
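>
> As a sketch, the lookup for interrupt injection could be as follows
> (the pdev_list and arch.gsbdf field names are assumptions of this
> sketch, not a final interface):
>
> /* Sketch only: return the guest sbdf of a physical device assigned
>  * to domain d, or all-ones if the device is not assigned to it. */
> static uint32_t pdev_to_gsbdf(struct domain *d, uint16_t seg,
>                               uint8_t bus, uint8_t devfn)
> {
>     struct pci_dev *pdev;
>
>     list_for_each_entry(pdev, &d->pdev_list, domain_list)
>     {
>         if ( pdev->seg == seg && pdev->bus == bus && pdev->devfn == devfn )
>             return pdev->arch.gsbdf;
>     }
>
>     return ~0;
> }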
> 
> Change in the PCI frontend-backend driver for MSI/X programming
> ---------------------------------------------------------------
> On the PCI frontend bus, a gicv3-its node is added as the msi-parent.
> A single virtual ITS per domU is sufficient, as there is only a single
> virtual PCI bus in the domU. This ensures that the MSI configuration
> calls are handled by the gicv3-its driver in the domU kernel instead
> of going through the frontend-backend communication between dom0 and
> domU.
> 
> 5. NUMA domU and vITS
> ---------------------
> a) On NUMA systems a domU still has a single vITS node.
> b) How can Xen identify the ITS to which a device is connected?
>    - Using the segment number, query the device tree node of the PCI
>      host controller through the API below (sketched after this list):
>
>      struct dt_device_node* pci_hostbridge_dt_node(uint32_t segno);
>
> c) Query the interrupt parent of the PCI device node to find the ITS.
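>
> A minimal sketch of pci_hostbridge_dt_node, based on the structures
> from section 2.1:
>
> /* Sketch only: return the device tree node of the host bridge that
>  * owns the given segment, so its interrupt parent (ITS) can be found. */
> struct dt_device_node *pci_hostbridge_dt_node(uint32_t segno)
> {
>     pci_hostbridge_t *pcihb;
>
>     list_for_each_entry(pcihb, &pci_hostbridge_list, list)
>     {
>         if ( pcihb->segno == segno )
>             return pcihb->dt_node;
>     }
>
>     return NULL; /* unknown segment */
> }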
> 
> 6. DomU boot-up and device-attach flow
> --------------------------------------
> a. The domU boots without any PCI devices assigned. A daemon listens
> for events from xenstore. When a device is attached to the domU, the
> frontend PCI bus driver starts enumerating the devices. The frontend
> driver communicates with the backend driver in dom0 to read the PCI
> config space.
> b. The device driver of the specific PCI device invokes methods to
> configure the MSI/MSI-X interrupts, which are handled by the ITS
> driver in the domU kernel. The reads/writes issued by the ITS driver
> are trapped in Xen. Xen's vITS then finds the actual sbdf based on the
> information recorded by the map_sbdf hypercall, as sketched below.
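>
> A sketch of that reverse lookup in the vITS, again assuming the gsbdf
> is stored in arch_pci_dev by the map_sbdf handler:
>
> /* Sketch only: translate the guest DeviceID (gsbdf) seen in a trapped
>  * ITS access back to the physical device assigned to domain d. */
> static struct pci_dev *gsbdf_to_pdev(struct domain *d, uint32_t gsbdf)
> {
>     struct pci_dev *pdev;
>
>     list_for_each_entry(pdev, &d->pdev_list, domain_list)
>     {
>         if ( pdev->arch.gsbdf == gsbdf )
>             return pdev; /* pdev->seg/bus/devfn give the physical sbdf */
>     }
>
>     return NULL; /* no assigned device with this guest sbdf */
> }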
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

