
Re: PCI passthrough on arm Design Session MoM

  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Thu, 9 Jul 2020 10:29:52 +0000
  • Accept-language: en-GB, en-US
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, nd <nd@xxxxxxx>, Rahul Singh <Rahul.Singh@xxxxxxx>
  • Delivery-date: Thu, 09 Jul 2020 10:30:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: PCI passthrough on arm Design Session MoM

> On 8 Jul 2020, at 14:32, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> On Wed, Jul 08, 2020 at 12:55:36PM +0000, Bertrand Marquis wrote:
>> Hi,
>> Here are the notes we took during the design session around PCI devices 
>> passthrough on Arm.
>> Feel free to comment or add anything :-)
>> Bertrand
>> PCI devices passthrough on Arm Design Session
>> ======================================
>> Date: 7/7/2020
>> - x86 vPCI support is for the PVH guest
> Current vPCI is only for PVH dom0. We need to decide what to do for
> PVH domUs, whether we want to use vPCI or xenpt from Paul:
> http://xenbits.xen.org/gitweb/?p=people/pauldu/xenpt.git;a=summary
> Or something else. I think this decision also needs to take into
> account Arm.

We are currently using vPCI for guests.
We could also look into xenpt, but from a quick check it requires a Dom0, 
which would defeat the Dom0less use case.

>> - The x86 PCI device discovery code should be checked and maybe reused on 
>> Arm, as it is not very complex
>>      - Remark from Julien: this might not work in a number of cases
>> - Xen must sanitize every PCI access made by each guest
>> - Trapping MSIs is not required for GICv3 (ITS) but is required for GICv2M
>>      - We do not plan to support non-ITS GICs
>> - Check the possibility of adding a specification to EBBR for PCI 
>> enumeration (the address assignment part)
>> - PCI enumeration support should not depend on Dom0 for safety reasons
>> - PCI enumeration could be done in several places
>>      - In the DTB, with entries giving values to be applied by Xen
>>      - In Xen (complex; not wanted beyond device discovery)
>>      - In firmware, followed by Xen device discovery
>> - According to Julien, it is difficult to tell Xen on which segment a PCI 
>> device is present
>>      - The current test implementation is done on Juno, where there is only 
>> one segment
>>      - This should be investigated with other hardware in the next months
> I'm not sure the segments used by Xen need to match the segments used
> by the guest. This is just an abstract value assigned from the OS (or
> Xen) in order to differentiate different MMCFG (ECAM) regions, and
> whether such numbers match doesn't seem relevant to me, as at the end
> Xen will trap ECAM accesses and map such accesses to the Xen assigned
> segments.
> Segments matching between the OS and Xen is only relevant when PCI
> information needs to be conveyed between the OS and Xen using some
> kind of hypercall, but I think you want to avoid using such side-band
> communication channels anyway?

We definitely want to avoid them.
On the Juno board we currently use, this question was ignored for now as 
there is only one region.
This is definitely something we need to investigate.
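On the segment point, here is a minimal sketch (Python, purely illustrative; the names are ours, not Xen code) of why guest and Xen segment numbers need not match: the PCIe ECAM layout fixes a register's offset within a region from (bus, device, function, register) alone, so a segment number merely selects a region base, and a trap handler can swap that base when forwarding a guest access.

```python
# Illustrative sketch only (not Xen code): in the PCIe ECAM layout, the
# offset within a region depends only on bus/device/function/register,
# so a segment number just selects which ECAM region base to use.

def ecam_offset(bus, dev, fn, reg):
    """Offset of a config register inside one ECAM region."""
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8 and 0 <= reg < 4096
    return (bus << 20) | (dev << 15) | (fn << 12) | reg

def guest_to_host_address(guest_addr, guest_ecam_base, host_ecam_base):
    """Translate a trapped guest ECAM access to the host-side address by
    swapping the region base; the BDF-derived offset is preserved, so the
    guest's segment numbering never has to match Xen's."""
    return host_ecam_base + (guest_addr - guest_ecam_base)
```

Under this assumption, only a side-band channel (e.g. a hypercall carrying segment numbers) would force the numbering to agree, which is exactly what we want to avoid.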

>> - Julien mentioned that clock issues will be complex to solve and that most 
>> hardware does not follow the ECAM standard
>> - Julien mentioned that Linux and Xen could do the enumeration in different 
>> ways, making it complex to have Linux do an enumeration after Xen
>> - We should push the code we have ASAP to the mailing list for review and 
>> discussion of the design
>>      - Arm will try to do that before the end of July
> I will be happy to give it a look and provide feedback.

Thanks, we will try to push our current status by the end of next week.

> For such complex pieces of work I would recommend to first send some
> kind of document to the mailing list in order to make sure the
> direction taken is accepted by the community, and we can also provide
> feedback or point to existing components that can be helpful :). If
> you have code done already that's also fine, feel free to send it.

We have some code done already, but we will definitely spend some time 
writing a design document to agree on before going too far.
There are still some areas we want to check technically for feasibility 
(regions, MSIs, and clocks, for example).

Thanks a lot for your feedback.


