
Re: [Xen-devel] PCI Pass-through in Xen ARM: Draft 4



>>> On 13.08.15 at 11:42, <mjaggi@xxxxxxxxxxxxxxxxxx> wrote:
>   2.1    pci_hostbridge and pci_hostbridge_ops
>   -----------------------------------------------------------------------------
>   The init function in the PCI host driver calls to register hostbridge
>   callbacks:
> 
>   int pci_hostbridge_register(pci_hostbridge_t *pcihb);
> 
>   struct pci_hostbridge_ops {
>       u32 (*pci_conf_read)(struct pci_hostbridge*, u32 bus, u32 devfn,
>                                   u32 reg, u32 bytes);
>       void (*pci_conf_write)(struct pci_hostbridge*, u32 bus, u32 devfn,
>                                   u32 reg, u32 bytes, u32 val);
>   };
> 
>   struct pci_hostbridge{
>       u32 segno;
>       paddr_t cfg_base;
>       paddr_t cfg_size;
>       struct dt_device_node *dt_node;
>       struct pci_hostbridge_ops ops;
>       struct list_head list;
>   };
> 
>   A PCI conf_read function would internally be as follows:
>   u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn, u32 reg, u32 bytes)
>   {
>       pci_hostbridge_t *pcihb;
>       list_for_each_entry(pcihb, &pci_hostbridge_list, list)
>       {
>           if(pcihb->segno == seg)
>               return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
>       }
>       return -1;
>   }

Which implies 1 segment per host bridge, which doesn't seem too
nice to me: I can't see why a bridge might not cover more than one
segment, and I also can't see why you shouldn't be able to put
multiple bridges in the same segment when the number of busses
they have is small.
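
Purely as an illustrative sketch (the bus_start/bus_end fields and the helper
below are my invention, not part of the draft), the lookup could key on both
segment and bus range, so that several bridges can share a segment:

    /* Hypothetical extension of the draft's pci_hostbridge: each bridge
     * decodes a bus range within a segment rather than a whole segment. */
    struct pci_hostbridge {
        u32 segno;
        u32 bus_start;                 /* first bus decoded by this bridge */
        u32 bus_end;                   /* last bus decoded by this bridge */
        paddr_t cfg_base;
        paddr_t cfg_size;
        struct dt_device_node *dt_node;
        struct pci_hostbridge_ops ops;
        struct list_head list;
    };

    static struct pci_hostbridge *pci_hostbridge_find(u32 seg, u32 bus)
    {
        struct pci_hostbridge *pcihb;

        list_for_each_entry(pcihb, &pci_hostbridge_list, list)
        {
            if ( pcihb->segno == seg &&
                 bus >= pcihb->bus_start && bus <= pcihb->bus_end )
                return pcihb;
        }
        return NULL;
    }

    u32 pcihb_conf_read(u32 seg, u32 bus, u32 devfn, u32 reg, u32 bytes)
    {
        struct pci_hostbridge *pcihb = pci_hostbridge_find(seg, bus);

        /* Reads of non-existent devices conventionally return all ones. */
        if ( !pcihb )
            return ~(u32)0;
        return pcihb->ops.pci_conf_read(pcihb, bus, devfn, reg, bytes);
    }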

>   2.2    PHYSDEVOP_pci_host_bridge_add hypercall
>   -----------------------------------------------------------------------------
>   Xen code accesses PCI configuration space based on the sbdf received from
>   the guest.

Guest? Not Dom0?
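
Independent of that, for readers of the draft: the argument structure of such
a hypercall would presumably look something like the sketch below. The field
names are my guess, mirroring pci_hostbridge above; this is not an existing
public interface.

    /* Assumed layout only, mirroring the pci_hostbridge fields. */
    struct physdev_pci_host_bridge_add {
        /* IN */
        uint16_t seg;                  /* PCI segment number */
        uint64_aligned_t cfg_base;     /* physical base of the config window */
        uint64_aligned_t cfg_size;     /* size of the config window */
    };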

>   4.1.1.2 For DomU
>   -----------------------------------------------------------------------------
>   For domU, while creating the domain, the toolstack reads the IPA from the
>   macro GITS_ITRANSLATER_SPACE from xen/include/public/arch-arm.h. The PA is
>   read from a new hypercall which returns PA of GITS_ITRANSLATER_SPACE.
> 
>   Subsequently the toolstack sends a hypercall to create a stage 2 mapping.
> 
>   Hypercall Details: XEN_DOMCTL_get_itranslater_space
> 
>   /* XEN_DOMCTL_get_itranslater_space */
>   struct xen_domctl_get_itranslater_space {
>       /* OUT variables. */
>       uint64_aligned_t start_addr;
>       uint64_aligned_t size;
>   };

Is this a domain specific rather than a globally unique range?
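
To make the intended toolstack flow concrete: it would presumably boil down to
something like the following, where xc_domain_get_itranslater_space() stands
in for whatever libxc wrapper the proposed domctl gets, and
GITS_ITRANSLATER_SPACE_BASE is an assumed name for the arch-arm.h constant.

    uint64_t its_pa, its_size;
    int rc;

    /* Proposed domctl: returns host PA and size of the ITS translater space. */
    rc = xc_domain_get_itranslater_space(xch, domid, &its_pa, &its_size);
    if ( rc )
        return rc;

    /* Create the stage 2 mapping at the fixed guest IPA from arch-arm.h. */
    rc = xc_domain_memory_mapping(xch, domid,
                                  GITS_ITRANSLATER_SPACE_BASE >> XC_PAGE_SHIFT,
                                  its_pa >> XC_PAGE_SHIFT,
                                  its_size >> XC_PAGE_SHIFT,
                                  1 /* add mapping */);
    return rc;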

>   4.2.1 Mapping BAR regions in guest address space
>   -----------------------------------------------------------------------------
>   When a PCI-EP device is assigned to a domU the toolstack will read the pci
>   configuration space BAR registers. The toolstack allocates a virtual BAR
>   region for each BAR region from the area reserved in the guest address
>   space for mapping BARs, referred to as the Guest BAR area. This area is
>   defined in public/arch-arm.h
> 
>   /* For 32bit BARs*/
>   #define GUEST_BAR_BASE_32 <<>>
>   #define GUEST_BAR_SIZE_32 <<>>
> 
>   /* For 64bit BARs*/
>   #define GUEST_BAR_BASE_64 <<>>
>   #define GUEST_BAR_SIZE_64 <<>>

This sounds like you want to use a pair of fixed (forever, since putting
it in the ABI) ranges - is that really a good idea? Wouldn't it be possible
(and necessary when assigning many or large devices) to establish
these ranges dynamically?

Furthermore OSes can generally reassign BARs as they see fit (as
long as the chosen addresses don't collide with anything else).
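
Regardless of where the area comes from (fixed or dynamic), the allocation
step the draft describes would amount to something like the toolstack-side
sketch below. The helper name is made up, and BAR sizes are assumed to be
powers of two, as the PCI spec requires.

    /* Hand out naturally aligned virtual BARs from the reserved guest area. */
    static uint64_t guest_bar_next_64 = GUEST_BAR_BASE_64;

    static int alloc_guest_bar_64(uint64_t size, uint64_t *ipa)
    {
        /* BARs are naturally aligned, i.e. aligned to their own size. */
        uint64_t base = (guest_bar_next_64 + size - 1) & ~(size - 1);

        if ( base + size > GUEST_BAR_BASE_64 + GUEST_BAR_SIZE_64 )
            return -1;                 /* reserved area exhausted */

        guest_bar_next_64 = base + size;
        *ipa = base;
        return 0;
    }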

>   4.2.2    Xenstore Update: For each PCI-EP BAR (IPA-PA mapping info).
>   ----------------------------------------------------------------------------
>   Toolstack also updates the xenstore information for the device
>   (virtual bar : physical bar). This information is read by xen-pciback and
>   returned to the domU pcifront driver on configuration space reads of the
>   BARs.
> 
>   Entries created are as follows:
>   /local/domain/0/backend/pci/1/0
>   vdev-N
>       BDF = ""
>       BAR-0-IPA = ""
>       BAR-0-PA = ""
>       BAR-0-SIZE = ""
>       ...
>       BAR-M-IPA = ""
>       BAR-M-PA = ""
>       BAR-M-SIZE = ""

This again looks like it wouldn't cope with the guest reassigning
BARs (since how would the toolstack know). Or did you mean to
say that in case of reassignment pciback also updates these
entries?

But I wonder in general why this would need to be done through
xenstore: pciback knows the real BAR base, and with the help of
the hypervisor it ought to be able to also know the corresponding
guest address.
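
(For reference, the population being proposed would boil down to toolstack
writes along these lines; xsh, domid, devid, vdev, num_bars and bars[] are
placeholders I made up for illustration.)

    /* Write the per-BAR keys with libxenstore. */
    char path[256], val[32];
    unsigned int bar;

    for ( bar = 0; bar < num_bars; bar++ )
    {
        snprintf(path, sizeof(path),
                 "/local/domain/0/backend/pci/%u/%u/vdev-%u/BAR-%u-IPA",
                 domid, devid, vdev, bar);
        snprintf(val, sizeof(val), "0x%llx",
                 (unsigned long long)bars[bar].ipa);
        if ( !xs_write(xsh, XBT_NULL, path, val, strlen(val)) )
            return -1;
        /* BAR-<n>-PA and BAR-<n>-SIZE would be written the same way. */
    }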

>   4.2.3 Hypercall Modification (XEN_DOMCTL_assign_device)
>   ----------------------------------------------------------------------------
>   For each machine sbdf a guest sbdf needs to be generated when a device is
>   assigned to a domU. Currently this is done by xen-pciback. As per
>   discussions [4] on xen-devel the df generation should be done by the
>   toolstack rather than by xen-pciback.

And this is already the case for HVM guests (just that the assignment
is done in qemu, which still is part of the tool stack). Iirc only PV
passthrough uses pciback for this.

>   Since there is only one pci-frontend bus in the domU, the s:b:d.f is
>   0:0:d.f. It is proposed in this design document that the df generation be
>   done by the toolstack, which also creates the xenstore keys.
> 
>   Following guest_sbdf generation, the domctl to assign the device is
>   invoked. This hypercall is updated to include *guest_sbdf*. The Xen ITS
>   driver can store the mapping domID : guest_sbdf : machine_sbdf for later
>   use.

Mind explaining why the hypervisor needs to know about the guest
topology? Is that because you want to continue to not involve qemu
in any of the emulation?
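
If the answer is yes, then what the hypervisor would end up recording is
presumably no more than a per-domain tuple like the following; the structure
and field names are my invention, not part of the draft.

    /* A guess at how the vITS code could record the mapping the draft
     * mentions (domID : guest_sbdf : machine_sbdf). */
    struct vits_device_map {
        domid_t  domid;
        uint32_t guest_sbdf;      /* virtual topology, handed in via the
                                   * extended XEN_DOMCTL_assign_device */
        uint32_t machine_sbdf;    /* real device, as today */
        struct list_head list;    /* per-domain list, consulted when ITS
                                   * commands reference a device ID */
    };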

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel