
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



On Tue, 31 Jan 2017, Edgar E. Iglesias wrote:
> On Tue, Jan 31, 2017 at 05:09:53PM +0000, Julien Grall wrote:
> > Hi Edgar,
> > 
> > Thank you for the feedback.
> 
> Hi Julien,
> 
> > 
> > On 31/01/17 16:53, Edgar E. Iglesias wrote:
> > >On Wed, Jan 25, 2017 at 06:53:20PM +0000, Julien Grall wrote:
> > >>On 24/01/17 20:07, Stefano Stabellini wrote:
> > >>>On Tue, 24 Jan 2017, Julien Grall wrote:
> > >>For a generic host bridge, there is no initialization to do. However,
> > >>some host bridges (e.g. xgene, xilinx) may require specific setup,
> > >>such as configuring clocks. Given that Xen only needs to access the
> > >>configuration space, I was thinking of letting DOM0 initialize the
> > >>host bridge. This would avoid importing a lot of code into Xen;
> > >>however, it means that we need to know when the host bridge has been
> > >>initialized before accessing the configuration space.
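> > >>
> > >>A very rough sketch of what I have in mind (the hypercall name and
> > >>the structure below are made up for illustration; nothing like this
> > >>exists today): DOM0 would notify Xen once the bridge is usable, and
> > >>Xen would defer configuration space accesses until then.
> > >>
> > >>    /* Hypothetical notification issued by DOM0 after the host bridge
> > >>     * driver has finished probing (clocks, PHYs, ECAM all set up). */
> > >>    struct physdev_pci_host_bridge_ready {
> > >>        uint16_t seg;       /* PCI segment of the bridge */
> > >>        uint64_t cfg_base;  /* physical base of the ECAM region */
> > >>        uint64_t cfg_size;  /* size of the ECAM region */
> > >>    };
> > >>
> > >>    /* DOM0 side, at the end of the host bridge driver probe: */
> > >>    struct physdev_pci_host_bridge_ready arg = {
> > >>        .seg      = 0,
> > >>        .cfg_base = 0xe0000000,   /* example ECAM base */
> > >>        .cfg_size = 0x10000000,   /* 256MB, i.e. buses 0-255 */
> > >>    };
> > >>    rc = HYPERVISOR_physdev_op(PHYSDEVOP_pci_host_bridge_ready, &arg);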
> > >
> > >
> > >Yes, that's correct.
> > >There's a sequence on the ZynqMP that involves assigning the Gigabit
> > >Transceivers to PCI (the GTs are shared among PCIe, USB, SATA and the
> > >Display Port), enabling clocks and configuring a few registers to
> > >enable ECAM and MSI.
> > >
> > >I'm not sure if this could be done prior to starting Xen. Perhaps.
> > >If so, bootloaders would have to know ahead of time what devices
> > >the GTs are supposed to be configured for.
> > 
> > I've got further questions regarding the Gigabit Transceivers. You
> > mention they are shared; do you mean that multiple devices can use a GT
> > at the same time, or that software decides at startup which device will
> > use a given GT? If the latter, how does the software make this decision?
> 
> Software will decide at startup. AFAIK, the allocation is normally done
> once but I guess that in theory you could design boards that could switch
> at runtime. I'm not sure we need to worry about that use-case though.
> 
> The details can be found here:
> https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
> 
> I suggest looking at pages 672 and 733.
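> 
> To give an idea of what the allocation amounts to (the register and
> field names below are illustrative; the TRM above is the authoritative
> reference): each GT lane has a protocol-select field that software
> programs once at startup, something like:
> 
>     /* Illustrative only: route SERDES lane 0 to PCIe. The real
>      * register layout and encodings are in UG1085 (pages above). */
>     #define SERDES_ICM_CFG0   0x10   /* lane protocol select (made up) */
>     #define ICM_PROTO_PCIE    0x1    /* protocol encoding (made up) */
> 
>     u32 v = readl(serdes_base + SERDES_ICM_CFG0);
>     v = (v & ~0x7) | ICM_PROTO_PCIE;   /* lane 0 -> PCIe */
>     writel(v, serdes_base + SERDES_ICM_CFG0);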
> 
> 
> 
> > 
> > >>  - For all other host bridges => I don't know if there are host
> > >>bridges falling under this category. I also have no idea how to
> > >>handle this.
> > >>
> > >>>
> > >>>Otherwise, if Dom0 is the only one to drive the physical host bridge,
> > >>>and Xen is the one to provide the emulated host bridge, how are DomU
> > >>>PCI config reads and writes supposed to work in detail?
> > >>
> > >>I think I have answered this question with my explanation above. Let
> > >>me know if that is not the case.
> > >>
> > >>> How is MSI configuration supposed to work?
> > >>
> > >>For GICv3 ITS, the MSI will be configured with the eventID (which is
> > >>unique per device) and the address of the doorbell. The linkage
> > >>between the LPI and the "MSI" will be done through the ITS.
> > >>
> > >>For GICv2m, the MSI will be configured with an SPI (or an offset on
> > >>some GICv2m variants) and the address of the doorbell. Note that for
> > >>DOM0, SPIs are mapped 1:1.
> > >>
> > >>So in both cases, I don't think it is necessary to trap MSI
> > >>configuration for DOM0. This may not be true if we want to handle
> > >>other MSI controllers.
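> > >>
> > >>To illustrate what the configuration boils down to in both cases
> > >>(a sketch only; the pci_conf_write* helpers stand in for whatever
> > >>config space accessors end up being used, and a 64-bit capable MSI
> > >>capability is assumed):
> > >>
> > >>    /* Program a device's MSI capability: the address points at the
> > >>     * doorbell, the data carries the eventID (ITS) or the SPI
> > >>     * offset (GICv2m). Offsets are relative to the capability. */
> > >>    #define PCI_MSI_ADDR_LO  0x04
> > >>    #define PCI_MSI_ADDR_HI  0x08
> > >>    #define PCI_MSI_DATA_64  0x0c
> > >>
> > >>    static void msi_program(pci_sbdf_t sbdf, unsigned int cap,
> > >>                            uint64_t doorbell, uint16_t id)
> > >>    {
> > >>        pci_conf_write32(sbdf, cap + PCI_MSI_ADDR_LO,
> > >>                         (uint32_t)doorbell);
> > >>        pci_conf_write32(sbdf, cap + PCI_MSI_ADDR_HI, doorbell >> 32);
> > >>        pci_conf_write16(sbdf, cap + PCI_MSI_DATA_64, id);
> > >>    }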
> > >>
> > >>I have in mind the Xilinx MSI controller (embedded in the host
> > >>bridge? [4]) and the xgene MSI controller ([5]). But I have no idea
> > >>how they work or whether we need to support them. Maybe Edgar could
> > >>share details on the Xilinx one?
> > >
> > >
> > >The Xilinx controller has 2 dedicated SPIs and pages for MSIs. AFAIK,
> > >there's no way to protect the MSI doorbells from misconfigured
> > >end-points raising malicious EventIDs. So perhaps trapped config
> > >accesses from domUs can help by adding this protection as drivers
> > >configure the device.
> > >
> > >On Linux, once MSIs hit, the kernel takes the SPI interrupt, reads
> > >out the EventID from a FIFO in the controller and injects a new IRQ
> > >into the kernel.
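> > >
> > >Roughly like this (a sketch from memory; the struct and register
> > >names are illustrative, the real code is in the Linux host bridge
> > >driver):
> > >
> > >    /* Chained handler for one of the dedicated MSI SPIs: drain the
> > >     * EventID FIFO and demux each entry to its mapped Linux virq. */
> > >    static void xilinx_msi_handler(struct irq_desc *desc)
> > >    {
> > >        struct irq_chip *chip = irq_desc_get_chip(desc);
> > >        struct xilinx_pcie *pcie = irq_desc_get_handler_data(desc);
> > >        u32 eventid;
> > >
> > >        chained_irq_enter(chip, desc);
> > >        while (!(readl(pcie->regs + MSI_FIFO_STATUS) & MSI_FIFO_EMPTY)) {
> > >            eventid = readl(pcie->regs + MSI_FIFO_DATA);
> > >            generic_handle_irq(irq_find_mapping(pcie->msi_domain,
> > >                                                eventid));
> > >        }
> > >        chained_irq_exit(chip, desc);
> > >    }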
> > 
> > It might be early to ask, but how do you expect MSI to work with a DomU
> > on your hardware? Does your MSI controller support virtualization? Or
> > are you looking for a different way to inject MSIs?
> 
> MSI support in the HW is quite limited when it comes to supporting domUs
> and will require SW hacks :-(
> 
> Anyway, something along the lines of this might work (a rough sketch
> follows the list):
> 
> * Trap domU CPU writes to MSI descriptors in config space.
>   Force the real MSI descriptors to point at the doorbell area.
>   Force the real MSI descriptors to use a device-unique EventID
>   allocated by Xen.
>   Remember which EventID the domU requested per device and descriptor.
> 
> * Xen or Dom0 takes the real SPI generated when the device writes into
>   the doorbell area.
>   At this point, we can read out the EventID from the MSI FIFO and map
>   it to the one requested by the domU.
>   Xen or Dom0 injects the expected EventID into the domU.
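> 
> A rough sketch of the remapping step (all names are hypothetical; it
> assumes Xen owns the FIFO and fills a per-EventID table when it traps
> the config space write):
> 
>     /* One entry per Xen-allocated real EventID, recording which
>      * domain owns it and which EventID the guest programmed. */
>     struct msi_remap {
>         struct domain *d;
>         uint32_t guest_eventid;
>     };
>     static struct msi_remap remap_table[NR_REAL_EVENTIDS];
> 
>     /* Handler for the real SPI raised by a doorbell write: drain the
>      * FIFO and inject the EventID the guest expects. */
>     static void doorbell_spi_handler(void)
>     {
>         while (!(readl(msi_regs + MSI_FIFO_STATUS) & MSI_FIFO_EMPTY)) {
>             uint32_t real = readl(msi_regs + MSI_FIFO_DATA);
>             struct msi_remap *r = &remap_table[real];
> 
>             if (r->d)
>                 vgic_inject_msi(r->d, r->guest_eventid); /* hypothetical */
>         }
>     }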
> 
> Do you have any good ideas? :-)

That's pretty much the same workflow as for Xen on x86. It's doable, and
we already have a lot of code to implement it, although it is scattered
across Xen, Dom0, and QEMU, which is a pain. It's one of the reasons I am
insisting on having only one component own PCI.
