Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



On Wed, 1 Feb 2017, Julien Grall wrote:
> Hi Edgar,
> 
> On 31/01/2017 19:06, Edgar E. Iglesias wrote:
> > On Tue, Jan 31, 2017 at 05:09:53PM +0000, Julien Grall wrote:
> > > On 31/01/17 16:53, Edgar E. Iglesias wrote:
> > > > On Wed, Jan 25, 2017 at 06:53:20PM +0000, Julien Grall wrote:
> > > > > On 24/01/17 20:07, Stefano Stabellini wrote:
> > > > > > On Tue, 24 Jan 2017, Julien Grall wrote:
> > > > > For a generic host bridge, no initialization is needed. However,
> > > > > some host bridges (e.g. xgene, xilinx) may require some specific
> > > > > setup, such as configuring clocks. Given that Xen only needs to
> > > > > access the configuration space, I was thinking to let DOM0
> > > > > initialize the host bridge. This would avoid importing a lot of
> > > > > code into Xen; however, it means that we need to know when the
> > > > > host bridge has been initialized before accessing the
> > > > > configuration space.
> > > > 
> > > > 
> > > > Yes, that's correct.
> > > > There's a sequence on the ZynqMP that involves assigning Gigabit
> > > > Transceivers to PCI (GTs are shared among PCIe, USB, SATA and the
> > > > DisplayPort), enabling clocks, and configuring a few registers to
> > > > enable ECAM and MSI.
> > > > 
> > > > I'm not sure if this could be done prior to starting Xen. Perhaps.
> > > > If so, bootloaders would have to know ahead of time which devices
> > > > the GTs are supposed to be configured for.
> > > 
> > > I've got further questions regarding the Gigabit Transceivers. You
> > > mention they are shared; do you mean that multiple devices can use a
> > > GT at the same time? Or does the software decide at startup which
> > > device will use a given GT? If so, how does the software make this
> > > decision?
> > 
> > Software will decide at startup. AFAIK, the allocation is normally done
> > once but I guess that in theory you could design boards that could switch
> > at runtime. I'm not sure we need to worry about that use-case though.
> > 
> > The details can be found here:
> > https://www.xilinx.com/support/documentation/user_guides/ug1085-zynq-ultrascale-trm.pdf
> > 
> > I suggest looking at pages 672 and 733.
> 
> Thank you for the documentation. I am trying to understand if we could
> move the initialization into Xen, as suggested by Stefano. I looked at
> the driver in Linux and the code looks simple, with not many
> dependencies. However, I was not able to find where the Gigabit
> Transceivers are configured. Do you have any link to the code for that?
> 
> This would also mean that the MSI interrupt controller will be moved
> into Xen, which I think is a more sensible design (see more below).
> 
> > > 
> > > > >       - For all other host bridges => I don't know if there are host
> > > > > bridges
> > > > > falling under this category. I also don't have any idea how to handle
> > > > > this.
> > > > > 
> > > > > > 
> > > > > > Otherwise, if Dom0 is the only one to drive the physical host
> > > > > > bridge, and Xen is the one to provide the emulated host bridge,
> > > > > > how are DomU PCI config reads and writes supposed to work in
> > > > > > detail?
> > > > > 
> > > > > I think I have answered this question with my explanation above.
> > > > > Let me know if that is not the case.
> > > > > 
> > > > > > How is MSI configuration supposed to work?
> > > > > 
> > > > > For the GICv3 ITS, the MSI will be configured with the EventID
> > > > > (which is unique per device) and the address of the doorbell. The
> > > > > linkage between the LPI and the "MSI" will be done through the
> > > > > ITS.
> > > > > 
> > > > > For GICv2m, the MSI will be configured with an SPI (or an offset
> > > > > on some GICv2m variants) and the address of the doorbell. Note
> > > > > that for DOM0, SPIs are mapped 1:1.
> > > > > 
> > > > > So in both cases, I don't think it is necessary to trap MSI
> > > > > configuration for DOM0. This may not be true if we want to handle
> > > > > other MSI controllers.
> > > > > 
> > > > > I have in mind the Xilinx MSI controller (embedded in the host
> > > > > bridge? [4]) and the xgene MSI controller ([5]). But I have no
> > > > > idea how they work and whether we need to support them. Maybe
> > > > > Edgar could share details on the Xilinx one?
> > > > 
> > > > 
> > > > The Xilinx controller has 2 dedicated SPIs and pages for MSIs.
> > > > AFAIK, there's no way to protect the MSI doorbells from
> > > > misconfigured endpoints raising malicious EventIDs. So perhaps
> > > > trapped config accesses from domUs can help by adding this
> > > > protection as drivers configure the device.
> > > > 
> > > > On Linux, once MSIs hit, the kernel takes the SPI interrupt, reads
> > > > out the EventID from a FIFO in the controller, and injects a new
> > > > IRQ into the kernel.
> > > 
> > > It might be early to ask, but how do you expect MSIs to work with
> > > domU on your hardware? Does your MSI controller support
> > > virtualization? Or are you looking for a different way to inject
> > > MSIs?
> > 
> > HW support for MSIs is quite limited when it comes to domUs and will
> > require SW hacks :-(
> > 
> > Anyway, something along the lines of this might work:
> > 
> > * Trap domU CPU writes to MSI descriptors in config space.
> >   Force real MSI descriptors to the address of the doorbell area.
> >   Force real MSI descriptors to use a specific, device-unique EventID
> >   allocated by Xen.
> >   Remember which EventID the domU requested per device and descriptor.
> > 
> > * Xen or Dom0 takes the real SPI generated when the device writes into
> >   the doorbell area.
> >   At this point, we can read out the EventID from the MSI FIFO and map
> >   it to the one requested by the domU.
> >   Xen or Dom0 injects the expected EventID into the domU.
> > 
> > Do you have any good ideas? :-)
> 
> From my understanding, your MSI controller is embedded in the host
> bridge, right? If so, the MSIs would need to be handled wherever the
> host bridge is initialized (i.e. either Xen or DOM0).
> 
> From a design point of view, it would make more sense to have the MSI
> controller driver in Xen, as the host bridge emulation for guests will
> also live there.
> 
> So if we receive MSIs in Xen, we need to figure out a way for DOM0 and
> guests to receive MSIs. Using the same mechanism for both would be
> best, and I guess non-PV if possible. I know you are looking to boot
> unmodified OSes in a VM. This would mean we would need to emulate the
> MSI controller and potentially the Xilinx PCI controller. How much are
> you willing to modify the OS?
> 
> Regarding the MSI doorbell, I have seen that it is configured by the
> software using the physical address of a page allocated in RAM. When a
> PCI device writes into the doorbell, does the access go through the
> SMMU?
> 
> Regardless of the answer, I think we would need to map the MSI doorbell
> page into the guest.

Why? We should be able to handle the case by trapping and emulating PCI
config accesses. Xen can force the real MSI descriptors to use whatever
Xen wants them to use. With an SMMU, we need to find a way to map the
MSI doorbell in the SMMU pagetables to allow the device to write to it.
Without an SMMU, it's unneeded.
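
To make that concrete, here is a rough sketch of the trap-and-rewrite
path (Edgar's first bullet above). This is not real Xen code: the
types, helper names and doorbell address are all hypothetical
stand-ins for whatever the eventual vPCI emulation ends up providing.

#include <stdint.h>

/* Hypothetical physical address of the real MSI doorbell. */
#define REAL_MSI_DOORBELL  0xfe440000ULL

/* Per-descriptor state Xen would remember for each assigned device. */
struct vmsi_desc {
    uint64_t guest_addr;      /* doorbell address the domU wrote      */
    uint32_t guest_eventid;   /* EventID the domU asked for           */
    uint32_t real_eventid;    /* device-unique EventID picked by Xen  */
};

/* Hypothetical allocator for device-unique EventIDs. */
static uint32_t alloc_real_eventid(void)
{
    static uint32_t next = 0x100;
    return next++;
}

/* Hypothetical helper programming the physical device's MSI registers. */
static void hw_program_msi(uint64_t addr, uint32_t data)
{
    (void)addr;   /* stub: would be a real config space write in Xen */
    (void)data;
}

/*
 * Called when a trapped domU write hits the MSI address/data pair in
 * the emulated config space: the guest-visible values are only
 * recorded, while the hardware is programmed with Xen-chosen ones.
 */
static void vmsi_cfg_write(struct vmsi_desc *desc,
                           uint64_t guest_addr, uint32_t guest_data)
{
    desc->guest_addr    = guest_addr;
    desc->guest_eventid = guest_data;
    desc->real_eventid  = alloc_real_eventid();

    hw_program_msi(REAL_MSI_DOORBELL, desc->real_eventid);
}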


> Meaning that even if we trap MSI configuration accesses, a guest could
> DMA into the page. So if I am not mistaken, MSI would be insecure in
> this case :/.

That's right: if a device capable of DMA to an arbitrary address in
memory is assigned to the guest, then with an SMMU the guest can write
to the MSI doorbell, and without an SMMU the guest can write to any
address in memory. Completely insecure.

It is the same security compromise offered by PV PCI passthrough today
with no VT-d on the platform. I think it's still usable in some cases,
but we need to be very clear about its security properties.
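
Going back to the delivery side (Edgar's second bullet): the
demultiplexing in Xen could look roughly like the sketch below. Every
helper here is a hypothetical stub, not the real Xilinx programming
model.

#include <stdint.h>
#include <stdio.h>

#define FIFO_EMPTY  0xffffffffu

/* Hypothetical MMIO accessor: pops one EventID from the controller's
 * FIFO, or returns FIFO_EMPTY when the FIFO is drained.  Stubbed. */
static uint32_t msi_fifo_read(void)
{
    return FIFO_EMPTY;
}

struct vmsi_target {
    int      domid;           /* owning domU                            */
    uint32_t guest_eventid;   /* EventID that domU originally requested */
};

/* Hypothetical reverse lookup: real EventID -> (domU, guest EventID).
 * Returns 0 on success.  Stubbed. */
static int vmsi_lookup(uint32_t real_eventid, struct vmsi_target *out)
{
    (void)real_eventid;
    (void)out;
    return -1;
}

/* Hypothetical injection of a virtual MSI into a guest. */
static void vgic_inject_msi(int domid, uint32_t guest_eventid)
{
    printf("inject EventID %u into domain %d\n", guest_eventid, domid);
}

/* SPI handler: drain the FIFO and forward each MSI to its guest. */
static void msi_spi_handler(void)
{
    uint32_t eventid;

    while ((eventid = msi_fifo_read()) != FIFO_EMPTY) {
        struct vmsi_target t;

        if (vmsi_lookup(eventid, &t) == 0)
            vgic_inject_msi(t.domid, t.guest_eventid);
        /* else: unknown or malicious EventID, drop it */
    }
}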


> Or maybe we could avoid mapping the doorbell in the guest and let Xen
> receive an SMMU abort. When receiving the SMMU abort, Xen could
> sanitize the value and write into the real MSI doorbell. Not sure if it
> would work though.

I thought that SMMU aborts are too slow for this?
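
To spell out why: per delivered MSI, the abort-based scheme would cost
at least one fault plus one emulated replay, something like the
hypothetical sketch below, and that is assuming the SMMU fault even
makes the written value available to Xen.

#include <stdint.h>

#define REAL_MSI_DOORBELL  0xfe440000ULL  /* hypothetical doorbell address */

/* Stub for the real MMIO write Xen would perform. */
static void mmio_write32(uint64_t addr, uint32_t val)
{
    (void)addr;
    (void)val;
}

/* Hypothetical check that the faulting device owns this EventID. */
static int eventid_allowed(int devid, uint32_t eventid)
{
    (void)devid;
    (void)eventid;
    return 0;
}

/*
 * Hypothetical handler run on every SMMU abort taken on the unmapped
 * doorbell page: one fault, one sanity check and one replayed write
 * per MSI the device sends.
 */
static void smmu_abort_handler(int devid, uint32_t written_val)
{
    if (eventid_allowed(devid, written_val))
        mmio_write32(REAL_MSI_DOORBELL, written_val);
    /* else: drop the rogue MSI */
}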
