
Re: PCI passthrough on arm Design Session MoM


  • To: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 8 Jul 2020 15:32:05 +0200
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, nd <nd@xxxxxxx>, Rahul Singh <Rahul.Singh@xxxxxxx>
  • Delivery-date: Wed, 08 Jul 2020 13:32:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jul 08, 2020 at 12:55:36PM +0000, Bertrand Marquis wrote:
> Hi,
> 
> Here are the notes we took during the design session around PCI devices 
> passthrough on Arm.
> Feel free to comment or add anything :-)
> 
> Bertrand
> 
> PCI devices passthrough on Arm Design Session
> ======================================
> 
> Date: 7/7/2020
> 
> - X86 vPCI support is for the PVH guest.

Current vPCI is only for PVH dom0. We need to decide what to do for
PVH domUs, whether we want to use vPCI or xenpt from Paul:

http://xenbits.xen.org/gitweb/?p=people/pauldu/xenpt.git;a=summary

Or something else. I think this decision also needs to take into
account Arm.
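
Whichever option is chosen, the domU-facing part boils down to trapping
config space accesses and deciding, per register, whether to emulate,
sanitize, or pass them through. Purely as a sketch of that idea (made-up
names, not the actual vPCI or xenpt interfaces):

/* Hypothetical per-register policy for trapped PCI config accesses.
 * Illustrative only; not the vPCI or xenpt code. */
#include <stdbool.h>
#include <stdint.h>

enum cfg_policy {
    CFG_EMULATE,      /* value comes from per-domain emulated state */
    CFG_READ_ONLY,    /* reads pass through, writes are dropped */
    CFG_PASSTHROUGH,  /* reads and writes reach the real device */
};

struct cfg_reg {
    uint16_t offset;  /* offset into config space */
    uint16_t size;    /* bytes covered by this entry */
    enum cfg_policy policy;
};

/* Example table for a hypothetical assigned device. */
static const struct cfg_reg policy_table[] = {
    { 0x00, 4, CFG_READ_ONLY },   /* vendor/device ID */
    { 0x04, 2, CFG_EMULATE },     /* command: sanitize bus mastering etc. */
    { 0x10, 4, CFG_EMULATE },     /* BAR0: guest sees a remapped address */
};

static enum cfg_policy lookup_policy(uint16_t offset)
{
    for (unsigned int i = 0;
         i < sizeof(policy_table) / sizeof(policy_table[0]); i++)
        if (offset >= policy_table[i].offset &&
            offset < policy_table[i].offset + policy_table[i].size)
            return policy_table[i].policy;

    return CFG_READ_ONLY; /* conservative default: never let writes through */
}

/* Called from a (hypothetical) trap handler on a guest config write. */
bool cfg_write_allowed(uint16_t offset)
{
    return lookup_policy(offset) == CFG_PASSTHROUGH;
}

This is also the kind of per-register sanitization the notes call out as
required for each guest's PCI accesses.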

> - X86 PCI device discovery code should be checked and maybe reused on Arm as 
> it is not very complex
>       - Remark from Julien: this might not work in a number of cases
> - Sanitization of each PCI access for each guest in Xen is required
> - MSI trapping is not required for GICv3 but it is required for GICv2m
>       - We do not plan to support non-ITS GICs
> - Check the possibility of adding some specification to EBBR for PCI enumeration 
> (the address assignment part)
> - PCI enumeration support should not depend on DOM0 for safety reasons
> - PCI enumeration could be done in several places
>       - DTB, with some entries giving values to be applied by Xen
>       - In Xen (complex, not wanted beyond device discovery)
>       - In firmware, with Xen then doing device discovery
> - As per Julien, it is difficult to tell Xen on which segment a PCI device 
> is present
>       - The current test implementation is done on Juno, where there is only 
> one segment
>       - This should be investigated on other hardware in the next months

I'm not sure the segments used by Xen need to match the segments used
by the guest. This is just an abstract value assigned by the OS (or
Xen) in order to differentiate MMCFG (ECAM) regions, and whether such
numbers match doesn't seem relevant to me, as in the end Xen will trap
ECAM accesses and map them to the Xen-assigned segments.

Segment matching between the OS and Xen is only relevant when PCI
information needs to be conveyed between the two using some kind of
hypercall, but I think you want to avoid such side-band communication
channels anyway?
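
To illustrate (purely a sketch with made-up names, not code from vPCI):
the segment only selects an ECAM region, and bus/device/function come
from the offset within that region, so the trap handler can translate
the guest's segment numbering into Xen's on the fly:

/* Hypothetical sketch: the segment is just an index selecting an ECAM
 * (MMCFG) region, so guest and Xen numbering need not match. */
#include <stdint.h>

struct ecam_region {
    uint64_t gpa_base;    /* guest-physical base of the emulated ECAM window */
    uint16_t xen_segment; /* segment Xen assigned to the same region */
};

/* Per-guest table indexed by the guest's segment number (example values). */
static const struct ecam_region guest_segments[] = {
    [0] = { .gpa_base = 0x4010000000ULL, .xen_segment = 2 },
};

/* Standard ECAM layout: bus[27:20] dev[19:15] fn[14:12] reg[11:0]. */
static void decode_ecam(uint64_t off, uint8_t *bus, uint8_t *dev,
                        uint8_t *fn, uint16_t *reg)
{
    *bus = (off >> 20) & 0xff;
    *dev = (off >> 15) & 0x1f;
    *fn  = (off >> 12) & 0x7;
    *reg = off & 0xfff;   /* config register offset within the function */
}

/* On a trapped access to guest segment 'seg', recover the SBDF in Xen's
 * own numbering; only the segment changes, bus/dev/fn stay the same. */
static uint32_t guest_ecam_to_xen_sbdf(uint16_t seg, uint64_t gpa,
                                       uint16_t *reg)
{
    const struct ecam_region *r = &guest_segments[seg];
    uint8_t bus, dev, fn;

    decode_ecam(gpa - r->gpa_base, &bus, &dev, &fn, reg);

    return ((uint32_t)r->xen_segment << 16) | (bus << 8) | (dev << 3) | fn;
}

Anything Xen needs internally can then be expressed in its own
numbering, which is why I don't think the guest-visible segment values
matter much.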

> - Julien mentioned that clock issues will be complex to solve and that most 
> hardware does not follow the ECAM standard
> - Julien mentioned that Linux and Xen could do the enumeration in different 
> ways, making it complex to have Linux do an enumeration after Xen
> - We should push the code we have ASAP to the mailing list for review and 
> discussion of the design
>       - Arm will try to do that before the end of July

I will be happy to give it a look and provide feedback.

For such complex pieces of work I would recommend first sending some
kind of design document to the mailing list in order to make sure the
direction taken is accepted by the community; we can also provide
feedback or point to existing components that can be helpful :). If
you already have code, that's also fine, feel free to send it.

Thanks, Roger.



 

