
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document

Hi Roger,

On 01/02/17 10:55, Roger Pau Monné wrote:
On Wed, Jan 25, 2017 at 06:53:20PM +0000, Julien Grall wrote:
Hi Stefano,

On 24/01/17 20:07, Stefano Stabellini wrote:
On Tue, 24 Jan 2017, Julien Grall wrote:
whilst for Device Tree the segment number is not available.

So Xen needs to rely on DOM0 to discover the host bridges and notify Xen
with all the relevant information. This will be done via a new hypercall
PHYSDEVOP_pci_host_bridge_add. The layout of the structure will be:

I understand that the main purpose of this hypercall is to get Xen and Dom0
to agree on the segment numbers, but why is it necessary? If Dom0 has an
emulated controller like any other guest, do we care what segment numbers
Dom0 will use?

I was not planning to have an emulated controller for DOM0. The physical one is
not necessarily ECAM compliant, so we would have to either emulate the physical
one (meaning multiple different emulations) or an ECAM-compliant one.

The latter is not possible because you don't know if there is enough free MMIO
space for the emulation.

In the case of ARM, I don't see much point in emulating the host bridge for
DOM0. The only thing we need in Xen is access to the configuration space; we
don't care about driving the host bridge. So I would let DOM0 deal with

Also, I don't see any reason for ARM to trap DOM0 configuration space accesses.
The MSIs will be configured using the interrupt controller and it is a trusted

These last two sentences raise a lot of questions. Maybe I am missing
something. You might want to clarify the strategy for Dom0 and DomUs,
and how they differ, in the next version of the doc.

At some point you wrote "Instantiation of a specific driver for the host
controller can be easily done if Xen has the information to detect it.
However, those drivers may require resources described in ASL." Does it
mean you plan to drive the physical host bridge from Xen and Dom0

I may miss some bits, so feel free to correct me if I am wrong.

My understanding is that host bridge handling can be divided into 2 parts:
        - Initialization of the host bridge
        - Access to the configuration space

For a generic host bridge, the initialization is nonexistent. However, some
host bridges (e.g. xgene, xilinx) may require some specific setup and also
configuring clocks. Given that Xen only needs to access the configuration
space, I was thinking of letting DOM0 initialize the host bridge. This would
avoid importing a lot of code into Xen; however, it means that we need to
know when the host bridge has been initialized before accessing the
configuration space.

Can the bridge be initialized without Dom0 having access to the ECAM area? If
that's possible I would do:

1. Dom0 initializes the bridge (whatever that involves).
2. Dom0 calls PHYSDEVOP_pci_mmcfg_reserved to register the bridge with Xen:
 2.1 Xen scans the bridge and detects the devices.
 2.2 Xen maps the ECAM area into Dom0 stage-2 p2m.
3. Dom0 scans the bridge &c (whatever is done on native).

As Stefano suggested, we should try to initialize the host bridge in Xen when possible. This will avoid a split interaction, and save our hair too :).

I am looking at different host bridges to see how much code would be required in Xen to handle them. I think the Xilinx root complex is an easy one (see the discussion in [1]), and it is manageable to get the code into Xen.

But some are much more complex. For instance, the R-Car (see the discussion in [2]) requires clocks, uses a specific way to access the configuration space, and has the MSI controller integrated in the root complex. This would require some interaction with DOM0. I will mention the problem in the design document but am not going to address it at the moment (too complex). We will have to support it at some point, though, as this root complex is used in automotive boards (see [3]).

For now I will address:
        - ECAM compliant/ECAM like root complex
        - Root complex with simple initialization

For DT, I would fall back to mapping the root complex to DOM0 if we don't support it. So DOM0 could still use PCI.

For ACPI, I expect all platforms to be ECAM compliant or to require only a few quirks. So I would mandate support for the root complex in Xen in order to get PCI supported.

Now regarding the configuration space, I think we can divide host bridges into 2 categories:
        - Indirect access, where the configuration space accesses are
multiplexed. An example would be the legacy method on x86 (e.g. 0xcf8 and
0xcfc). A similar method is used by the x-gene PCI driver ([1]).
        - ECAM-like access, where each PCI configuration space has its own
address space. I say "ECAM-like" because some host bridges require some
bit fiddling when accessing registers (see thunder-ecam [2]).

There are also host bridges that mix both indirect access and ECAM-like
access depending on which device's configuration space is accessed (see thunder-pem
Hay! Sounds like fun...

When using an ECAM-like host bridge, I don't think it will be an issue to have
both DOM0 and Xen accessing the configuration space at the same time. Although,
we need to define who is doing what. In the general case, DOM0 should not
touch an assigned PCI device. The only possible interaction would be
resetting a device (see my answer below).

Iff Xen is really going to perform the reset of passthrough devices, then I
don't see any reason to expose those devices to Dom0 at all. IMHO you should
hide them from ACPI and ideally prevent Dom0 from interacting with them through
the PCI configuration space (although that would require trapping accesses
to the PCI config space, which AFAIK you would like to avoid).

I was indeed thinking of avoiding trapping PCI config space accesses, but you and Stefano changed my mind. It does not cost too much to trap ECAM accesses, and it would be necessary on non-ECAM ones anyway.

This will also simplify the way to hide a PCI device from DOM0. Xen can do it by making the config space of the device unavailable to DOM0 (similar to the pciback.hide option today).


[1] <a1120a60-b859-c7ff-9d4a-553c330669f1@xxxxxxxxxx>
[2] <616043e2-82d6-9f64-94fc-5c836d41818f@xxxxxxxxxx>
[3] https://www.renesas.com/en-us/solutions/automotive/products/rcar-h3.html

Julien Grall
