
Re: [Xen-devel] [early RFC] ARM PCI Passthrough design document



Hi Stefano,

On 31/01/2017 21:58, Stefano Stabellini wrote:
On Wed, 25 Jan 2017, Julien Grall wrote:
whilst for Device Tree the segment number is not available.

So Xen needs to rely on DOM0 to discover the host bridges and notify Xen
with all the relevant information. This will be done via a new hypercall
PHYSDEVOP_pci_host_bridge_add. The layout of the structure will be:
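The structure layout did not survive the quoting above. Purely as an
illustration (the field names and layout below are my guess, not the ABI
from the design document), the hypercall argument could look something
like:

    /* Illustrative sketch only -- not the actual layout from the design
     * document; the real hypercall argument may differ. */
    struct physdev_pci_host_bridge_add {
        /* IN */
        uint16_t seg;        /* Segment number DOM0 associates with the bridge */
        uint8_t  bus_start;  /* First bus handled by the host bridge           */
        uint8_t  bus_nr;     /* Number of buses behind the host bridge         */
        uint32_t res0;       /* Padding                                        */
        uint64_t cfg_base;   /* Physical address of the config space window    */
        uint64_t cfg_size;   /* Size of the config space window                */
    };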

I understand that the main purpose of this hypercall is to get Xen and Dom0
to agree on the segment numbers, but why is it necessary? If Dom0 has an
emulated controller like any other guest, do we care what segment numbers
Dom0 will use?

I was not planning to have an emulated controller for DOM0. The physical one
is not necessarily ECAM compliant, so we would have to emulate either the
physical one (meaning multiple different emulations) or an ECAM-compliant one.

The latter is not possible because you don't know whether there is enough
free MMIO space for the emulation.

In the case of ARM, I don't see much point in emulating the host bridge for
DOM0. The only thing we need in Xen is access to the configuration space; we
don't care about driving the host bridge. So I would let DOM0 deal with that.

Also, I don't see any reason for ARM to trap DOM0 configuration space
accesses. The MSIs will be configured using the interrupt controller, and
DOM0 is a trusted domain.

These last few sentences raise a lot of questions. Maybe I am missing
something. You might want to clarify the strategy for Dom0 and DomUs,
and how they differ, in the next version of the doc.

At some point you wrote "Instantiation of a specific driver for the host
controller can be easily done if Xen has the information to detect it.
However, those drivers may require resources described in ASL." Does it
mean you plan to drive the physical host bridge from Xen and Dom0
simultaneously?

I may be missing some bits, so feel free to correct me if I am wrong.

My understanding is that a host bridge can be divided into 2 parts:
        - Initialization of the host bridge
        - Accessing the configuration space

For a generic host bridge, the initialization is non-existent. However, some
host bridges (e.g. xgene, xilinx) may require some specific setup and also
the configuration of clocks. Given that Xen only needs access to the
configuration space, I was thinking of letting DOM0 initialize the host
bridge. This would avoid importing a lot of code into Xen; however, it means
we need to know when the host bridge has been initialized before accessing
the configuration space.
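For a generic (ECAM) host bridge, accessing the configuration space boils
down to an address computation into the memory-mapped window, which is why no
bridge-specific knowledge is needed in Xen. A minimal sketch, assuming
cfg_base is the (virtual) mapping of the ECAM window (the helper names are
made up for illustration):

    #include <stdint.h>

    /* ECAM: each function gets a 4KB slice of the window, addressed as
     * base + (bus << 20 | device << 15 | function << 12) + register. */
    static inline volatile uint32_t *ecam_cfg_addr(uint8_t *cfg_base,
                                                   uint8_t bus, uint8_t dev,
                                                   uint8_t fn, uint16_t reg)
    {
        return (volatile uint32_t *)(cfg_base +
               (((uint32_t)bus << 20) | ((uint32_t)dev << 15) |
                ((uint32_t)fn << 12) | (reg & 0xffc)));
    }

    static inline uint32_t ecam_read32(uint8_t *cfg_base, uint8_t bus,
                                       uint8_t dev, uint8_t fn, uint16_t reg)
    {
        return *ecam_cfg_addr(cfg_base, bus, dev, fn, reg);
    }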

I prefer to avoid a split-mind approach, where some PCI things are
initialized/owned by one component and some others are initialized/owned
by another component. It creates complexity. Of course, we have to face
the reality that the alternatives might be worse, but let's take a look
at the other options first.

How hard would it be to bring the PCI host bridge initialization in Xen,
for example in the case of the Xilinx ZynqMP? Traditionally, PCI host
bridges have not required any initialization on x86. PCI is still new to
the ARM ecosystems. I think it is reasonable to expect that going
forward, as the ARM ecosystem matures, PCI host bridges will require
little to no initialization on ARM too.

I would agree for servers, but I am less sure for embedded systems. You may want to save address space or even power, potentially requiring a custom host bridge. I hope I am wrong here.

I think the Xilinx host bridge is the simplest case. I am trying to understand it better in a separate e-mail (see <a1120a60-b859-c7ff-9d4a-553c330669f1@xxxxxxxxxx>).

There are more complex host bridges such as X-Gene [1] and R-Car [2].
If we take the example of the Renesas Salvator board being used in automotive (GlobalLogic and Bosch are working on support for Xen [3]), it contains an R-Car PCI root complex; below is a part of the DTS:

/* External PCIe clock - can be overridden by the board */
pcie_bus_clk: pcie_bus {
                compatible = "fixed-clock";
                #clock-cells = <0>;
                clock-frequency = <0>;
};

pciec0: pcie@fe000000 {
        compatible = "renesas,pcie-r8a7795";
        reg = <0 0xfe000000 0 0x80000>;
        #address-cells = <3>;
        #size-cells = <2>;
        bus-range = <0x00 0xff>;
        device_type = "pci";
        ranges = <0x01000000 0 0x00000000 0 0xfe100000 0 0x00100000
                  0x02000000 0 0xfe200000 0 0xfe200000 0 0x00200000
                  0x02000000 0 0x30000000 0 0x30000000 0 0x08000000
                  0x42000000 0 0x38000000 0 0x38000000 0 0x08000000>;
        /* Map all possible DDR as inbound ranges */
        dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>;
        interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
                     <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
        #interrupt-cells = <1>;
        interrupt-map-mask = <0 0 0 0>;
        interrupt-map = <0 0 0 0 &gic GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
        clocks = <&cpg CPG_MOD 319>, <&pcie_bus_clk>;
        clock-names = "pcie", "pcie_bus";
        power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
        status = "disabled";
};

The PCI controller depends on 2 clocks, one of which requires a specific driver. It also contains a power domain, which I guess will require some configuration and would need to be shared with Linux.

Furthermore, the R-Car driver has a specific way to access the configuration space (see rcar_pcie_config_access). It is actually the first root complex I have found falling under the category "For all other host bridges" from my previous mail.

Lastly, the MSI controller is integrated in the root complex here too.

So I think the R-Car root complex is the kind of hardware that would require merging half of Linux into Xen and potentially emulating some parts of the hardware (such as the clocks) for DOM0.

I don't have any good idea here which does not involve DOM0. I would be happy to know what other people think.

Note that I don't think we can possibly say we don't support PCI passthrough.

Now regarding the configuration space, I think we can divide it into 2
categories:
        - indirect access, where the configuration spaces are multiplexed. An
example would be the legacy method on x86 (e.g. 0xcf8 and 0xcfc; see the
sketch after this list). A similar method is used by the X-Gene PCI driver ([1]).
        - ECAM-like access, where each PCI configuration space has its own
address space. I say "ECAM-like" because some host bridges require some bit
fiddling when accessing registers (see thunder-ecam [2]).
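For concreteness, here is a sketch of the legacy indirect mechanism mentioned
in the first bullet, using the Linux-style outl(value, port)/inl(port) port
I/O helpers. A single address/data register pair is shared by every device,
so nothing serializes two agents poking it at the same time:

    #define PCI_CONF_ADDR 0xcf8 /* address register */
    #define PCI_CONF_DATA 0xcfc /* data register    */

    static uint32_t pci_indirect_read32(uint8_t bus, uint8_t dev, uint8_t fn,
                                        uint8_t reg)
    {
        /* Step 1: select bus/device/function/register... */
        outl(0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11) |
             ((uint32_t)fn << 8) | (reg & 0xfc), PCI_CONF_ADDR);
        /* Step 2: ...then read the data register. Another entity writing
         * PCI_CONF_ADDR between the two steps corrupts this access. */
        return inl(PCI_CONF_DATA);
    }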

There are also host bridges that mix both indirect access and ECAM-like
access depending on which device's configuration space is accessed (see
thunder-pem [3]).

When using an ECAM-like host bridge, I don't think it will be an issue to
have both DOM0 and Xen accessing the configuration space at the same time,
although we need to define who is doing what. In the general case, DOM0
should not touch an assigned PCI device. The only possible interaction would
be resetting a device (see my answer below).

Even if the hardware allows it, I think it is a bad idea to access the
same hardware component from two different entities simultaneously.

I suggest we trap Dom0 reads/writes to ECAM and execute them in Xen, which I
think is what x86 does today.

FWIW, Roger confirmed this to me IRL. So I will update the design document to specify that DOM0 accesses are trapped, even if we may not take advantage of it today.
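For the record, a rough sketch of what the Xen-side trap could look like,
assuming a per-host-bridge MMIO handler registered over DOM0's ECAM window
(the types and helpers below are made up for illustration; this is not
existing Xen code):

    /* Hypothetical MMIO read handler covering DOM0's ECAM window. */
    static int dom0_ecam_mmio_read(struct vcpu *v, paddr_t addr,
                                   unsigned int len, uint64_t *val)
    {
        struct pci_host_bridge *bridge = find_bridge_by_ecam_addr(addr);
        paddr_t off = addr - bridge->cfg_base;
        uint8_t bus = (off >> 20) & 0xff;
        uint8_t dev = (off >> 15) & 0x1f;
        uint8_t fn  = (off >> 12) & 0x07;

        /* Any policy (e.g. hiding devices assigned to other guests) would
         * be applied here before forwarding the access to the bridge. */
        return bridge->ops->config_read(bridge, bus, dev, fn,
                                        off & 0xfff, len, val);
    }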



When using indirect access, we cannot let DOM0 and Xen access any PCI
configuration space at the same time. So I think we would have to emulate the
physical host controller.

Unless we have a strong requirement to trap DOM0 accesses to the
configuration space, I would keep the emulation to the strict minimum (e.g.
for indirect access) to avoid ending up handling all the quirks of ECAM-like
host bridges.

If we need to trap the configuration space, I would suggest the following for
ECAM-like host bridges:
        - For a physical host bridge that does not require initialization and
is nearly ECAM compatible (e.g. requires register fiddling) => replace it
with a generic host bridge emulation for DOM0

Sounds good.


        - For a physical host bridge that requires initialization but is ECAM
compatible (e.g. AFAICT xilinx [4]) => trap the ECAM accesses but let DOM0
handle the host bridge initialization

I would consider doing the initialization in Xen. It would simplify the
architecture significantly.

See above an example where it does not fit.

        - For all other host bridges => I don't know if there are host
bridges falling into this category. I also don't have any idea how to handle
them.

Cheers,

[1] linux/drivers/pci/host/pci-xgene.c
[2] linux/drivers/pci/host/pcie-rcar.c
[3] https://lists.xen.org/archives/html/xen-devel/2016-11/msg00594.html

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

