
Re: [Xen-devel] [RFC + Queries] Flow of PCI passthrough in ARM



On Thu, 18 Sep 2014, manish jaggi wrote:
> Hi,
> Below is the flow I am working on. Please provide your comments; I
> have a couple of queries as well.
> 
> a) The device tree has smmu nodes and each smmu node has the mmu-master
> property. In our SoC DT the mmu-master is a pcie node in the device tree.

Do you mean that both the smmu nodes and the pcie node have the
mmu-master property? The pcie node is the pcie root complex, right?
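
Just to make sure we are talking about the same binding: with the
mmu-masters style of binding the association goes from the SMMU node to
its masters, as a list of <phandle stream-id ...> entries. A sketch of
walking that list with the Linux-style of_* helpers (register_master()
is a made-up placeholder, not an existing function):

    #include <linux/of.h>

    /* hypothetical helper, just for illustration */
    static void register_master(struct device_node *master_np,
                                const uint32_t *streamids, int count);

    static void parse_mmu_masters(struct device_node *smmu_np)
    {
        struct of_phandle_args masterspec;
        int i = 0;

        /* each mmu-masters entry is <&master streamid ...>, e.g. <&pcie 0x100> */
        while (!of_parse_phandle_with_args(smmu_np, "mmu-masters",
                                           "#stream-id-cells", i,
                                           &masterspec)) {
            /* masterspec.np is the master node (the pcie root complex
             * in your case), masterspec.args[] are its stream IDs */
            register_master(masterspec.np, masterspec.args,
                            masterspec.args_count);
            i++;
        }
    }

If in your SoC DT the pcie node itself also carries a property pointing
at the smmu, then it is a different arrangement, hence the question.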


> b) Xen parses the device tree and prepares a list which stores the pci
> device tree node pointers. The order in the device tree is mapped to a
> segment number in subsequent calls. E.g. the 1st pci node found is
> segment 0, the 2nd is segment 1.

What's a segment number? Something from the PCI spec?
If you have several pci nodes in the device tree, does that mean that you
have several different pcie root complexes?
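
If I am reading b) correctly, the "segment" is just an index handed out
in discovery order, something like this on the Xen side (a hypothetical
sketch, all names made up):

    #include <xen/device_tree.h>
    #include <xen/errno.h>
    #include <xen/list.h>
    #include <xen/xmalloc.h>

    struct pci_host {
        struct dt_device_node *node;  /* pcie node in the device tree */
        uint16_t segment;             /* 0 for the 1st node found, 1 for the 2nd, ... */
        struct list_head list;
    };

    static LIST_HEAD(pci_hosts);
    static uint16_t next_segment;

    /* called once per pcie node found while parsing the device tree */
    static int pci_host_add(struct dt_device_node *node)
    {
        struct pci_host *host = xzalloc(struct pci_host);

        if ( !host )
            return -ENOMEM;

        host->node = node;
        host->segment = next_segment++;
        list_add_tail(&host->list, &pci_hosts);

        return 0;
    }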


> c) During SMMU init the pcie nodes in DT are saved as smmu masters.

At this point you should also be able to find via DT the stream-id ranges
supported by each SMMU and program the SMMUs with them, assigning
everything to dom0.
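
Concretely I am thinking of something along these lines, done at SMMU
probe time rather than waiting for a hypercall from dom0 (hypothetical
structures and helper, only to illustrate the idea):

    #include <xen/list.h>
    #include <xen/sched.h>

    struct smmu_master {
        struct list_head list;
        unsigned int num_streamids;
        uint16_t streamids[8];        /* filled in from mmu-masters */
    };

    /* hypothetical: point this stream ID at the context that uses the
     * given domain's stage-2 page tables */
    static void smmu_route_streamid(uint16_t streamid, struct domain *d);

    /* at SMMU probe time, before dom0 starts driving the devices, route
     * every stream ID found in the device tree to dom0 */
    static void smmu_assign_all_masters(struct list_head *masters,
                                        struct domain *dom0)
    {
        struct smmu_master *master;
        unsigned int i;

        list_for_each_entry ( master, masters, list )
            for ( i = 0; i < master->num_streamids; i++ )
                smmu_route_streamid(master->streamids[i], dom0);
    }

The point being that the devices never get to DMA outside dom0's own
memory, no matter how early dom0 starts using them.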


> d) Dom0 enumerates PCI devices and calls the PHYSDEVOP_pci_device_add
> hypercall.
> - In Xen the SMMU iommu_ops add_device is called. I have implemented
> the add_device function.
> - In the add_device function the segment number is used to locate the
> device tree node pointer of the pcie node, which helps to find the
> corresponding smmu.
> - In the same PHYSDEVOP the BAR regions are mapped to Dom0.
> 
> Note: the current SMMU driver maps the domain's complete address space
> for the device in the SMMU hardware.
> 
> The above flow works currently for us.
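
For reference, the dom0 side of step d) is the PCI bus notifier in
drivers/xen/pci.c; stripped down to the essentials it is roughly the
following (leaving out details such as the SR-IOV handling):

    #include <linux/pci.h>
    #include <xen/interface/physdev.h>
    #include <asm/xen/hypercall.h>

    /* simplified version of drivers/xen/pci.c:xen_add_device() */
    static int xen_add_device(struct device *dev)
    {
        struct pci_dev *pci_dev = to_pci_dev(dev);
        struct physdev_pci_device_add add = {
            .seg   = pci_domain_nr(pci_dev->bus),
            .bus   = pci_dev->bus->number,
            .devfn = pci_dev->devfn,
        };

        /* this is the call that ends up in Xen's iommu_ops add_device
         * and, per the flow above, in the BAR mappings for dom0 */
        return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);
    }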

It would be nice to be able to skip d): in a system where all DMA-capable
devices are behind smmus, we should be capable of booting dom0 without
the 1:1 mapping hack. If we do that, it would be better to program the
smmus before booting dom0. Otherwise there is a risk that dom0 is going
to start using these devices and doing DMA before we manage to secure
the devices via smmus.
                          
Of course we can do that if there are no alternatives. But in our case
we should be able to extract the stream-ids from the device tree and
program the smmus right away, right? Do we really need to wait for dom0
to call PHYSDEVOP_pci_device_add? We could just assign everything to dom0
for a start.

I would like to know from the x86 guys if this is really how it is
supposed to work on PVH too. Do we rely on PHYSDEVOP_pci_device_add to
program the IOMMU?


> Now when I call pci-assignable-add I see that the iommu_ops
> remove_device in the smmu driver is not called. If that is not called,
> the SMMU would still have the dom0 address space mappings for that
> device.
> 
> Can you please suggest the best place (kernel / xl-tools) to put the
> code which would call the remove_device in iommu_ops in the control
> flow from pci-assignable-add?
> 
> One way I see is to introduce a DOMCTL_iommu_remove_device in
> pci-assignable-add / pci-detach and a DOMCTL_iommu_add_device in
> pci-attach. Is that a valid approach?

I am not 100% sure, but I think that before assigning a PCI device to
another guest, you are supposed to bind the device to xen-pciback (see
drivers/xen/xen-pciback, also see
http://wiki.xen.org/wiki/Xen_PCI_Passthrough). The pciback driver is
going to hide the device from dom0 and, as a consequence,
drivers/xen/pci.c:xen_remove_device ends up being called, which issues a
PHYSDEVOP_pci_device_remove hypercall.
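
So the removal path should already be there without introducing a new
DOMCTL; simplified, the dom0 side boils down to roughly this (based on
drivers/xen/pci.c, please double-check the details):

    #include <linux/pci.h>
    #include <xen/interface/physdev.h>
    #include <asm/xen/hypercall.h>

    /* simplified version of drivers/xen/pci.c:xen_remove_device() */
    static int xen_remove_device(struct device *dev)
    {
        struct pci_dev *pci_dev = to_pci_dev(dev);
        struct physdev_pci_device device = {
            .seg   = pci_domain_nr(pci_dev->bus),
            .bus   = pci_dev->bus->number,
            .devfn = pci_dev->devfn,
        };

        /* on the Xen side this looks like the natural place to call the
         * iommu_ops remove_device and drop the dom0 mappings */
        return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_remove, &device);
    }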



 

