
Re: [Xen-devel] [PATCH for-4.6 0/5] xen: arm: Parse PCI DT nodes' ranges and interrupt-map



On 2/17/2015 6:31 PM, Suravee Suthikulanit wrote:
On 2/17/2015 4:35 PM, Suravee Suthikulanit wrote:
On 2/17/2015 7:50 AM, Andrew Cooper wrote:
On 17/02/15 13:43, Julien Grall wrote:
(CC Jan and Andrew)

Hi Suravee,

On 17/02/15 03:04, Suravee Suthikulanit wrote:
By the way, looking at the output of "xl dmesg", I saw the following
messages:

(XEN) DOM0: PCI host bridge /smb/pcie@f0000000 ranges:
(XEN) DOM0:    IO 0xefff0000..0xefffffff -> 0x00000000
(XEN) DOM0:   MEM 0x40000000..0xbfffffff -> 0x40000000
(XEN) DOM0:   MEM 0x100000000..0x7fffffffff -> 0x100000000
(XEN) DOM0: pci-host-generic f0000000.pcie: PCI host bridge to bus
0000:00
(XEN) DOM0: pci_bus 0000:00: root bus resource [bus 00-7f]
(XEN) DOM0: pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
(XEN) DOM0: pci_bus 0000:00: root bus resource [mem
0x40000000-0xbfffffff]
(XEN) DOM0: pci_bus 0000:00: root bus resource [mem
0x100000000-0x7fffffffff]
(XEN) DOM0: pci 0000:00:00.0: of_irq_parse_pci() failed with rc=-19
(XEN) do_physdev_op 16 cmd=25: not implemented yet
(XEN) do_physdev_op 16 cmd=15: not implemented yet
(XEN) DOM0: pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X
might fail!
(XEN) DOM0: pci 0000:00:02.0: of_irq_parse_pci() failed with rc=-19
(XEN) do_physdev_op 16 cmd=15: not implemented yet
(XEN) DOM0: pci 0000:00:02.0: Failed to add - passthrough or MSI/MSI-X
might fail!
(XEN) do_physdev_op 16 cmd=15: not implemented yet
(XEN) DOM0: pci 0000:00:02.1: Failed to add - passthrough or MSI/MSI-X
might fail!
(XEN) do_physdev_op 16 cmd=15: not implemented yet
(XEN) DOM0: pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X
might fail!
(XEN) DOM0: pci 0000:01:00.1: of_irq_parse_pci() failed with rc=-22
(XEN) do_physdev_op 16 cmd=15: not implemented yet
(XEN) DOM0: pci 0000:01:00.1: Failed to add - passthrough or MSI/MSI-X
might fail!

IIUC, this is because xen_add_device() failed, and it seems to be
related to some hypercall not being implemented. I'm not sure what
"cmd=15" is. Any ideas?
There are two commands not implemented in the log:
    * cmd 15: PHYSDEVOP_manage_pci_add
    * cmd 25: PHYSDEVOP_pci_device_add

Linux falls back on the former because the latter is not implemented.

AFAICT, PHYSDEVOP_manage_pci_add should not be implemented for ARM
because it doesn't support segments. I suspect it's kept for legacy
reasons on older x86 Xen. Maybe Jan or Andrew have more input on this?

It needs to be kept for backwards compatibility in x86.

All new code should use PHYSDEVOP_pci_device_add.

~Andrew


Ok, now that I look at arch/arm/physdev.c, I don't think the code
supporting any of the PHYSDEVOP_xxx commands is there. That's probably
why Xen complains. In contrast, arch/x86/physdev.c already supports
most of the PHYSDEVOP_xxx commands.

My question is: are we supposed to add that support here?

Thanks,

Suravee.

My guess is yes, and that would mean we need to enable building
drivers/pci.c when building the ARM code, which then opens up a can of
worms around refactoring the MSI support code out of x86, etc.

Suravee

Actually, that seems to be more related to PCI device pass-through. Haven't the Cavium guys already done the work to support pass-through of their PCI devices?

Anyway, at this point I am able to generate the Dom0 device tree with a correct v2m node, and I can see the Dom0 gicv2m driver probing and initializing correctly, as it would on bare metal.

# Snippet from /sys/firmware/fdt showing dom0 GIC node
        interrupt-controller {
                compatible = "arm,gic-400", "arm,cortex-a15-gic";
                #interrupt-cells = <0x3>;
                interrupt-controller;
                reg = <0x0 0xe1110000 0x0 0x1000 0x0 0xe112f000 0x0 0x2000>;
                phandle = <0x1>;
                #address-cells = <0x2>;
                #size-cells = <0x2>;
                ranges = <0x0 0x0 0x0 0xe1100000 0x0 0x100000>;

                v2m {
                        compatible = "arm,gic-v2m-frame";
                        msi-controller;
                        arm,msi-base-spi = <0x40>;
                        arm,msi-num-spis = <0x100>;
                        phandle = <0x5>;
                        reg = <0x0 0x80000 0x0 0x1000>;
                };
        };

linux:~ # dmesg | grep v2m
[    0.000000] GICv2m: Overriding V2M MSI_TYPER (base:64, num:256)
[    0.000000] GICv2m: Node v2m: range[0xe1180000:0xe1180fff], SPI[64:320]

So, during v2m setup in the hypervisor, I also call route_irq_to_guest() for all the SPIs used for MSIs (i.e. 64-320 on Seattle), which forces the MSIs to Dom0. However, we would need to figure out how to detach and re-route certain interrupts to a specific DomU when passing through PCI devices in the future.

So, here's what I got:

linux:~ # cat /proc/interrupts
           CPU0   CPU1   CPU2   CPU3   CPU4   CPU5
  0:          0      0      0      0      0      0  xen-dyn-event    xenbus
  3:      19872  19872  19870  19861  19866  20034  GIC  27          arch_timer
  4:         91      1      1      1      1      1  GIC  31          events
 13:       2397      0      0      0      0      0  GIC 387          e0300000.sata
 14:          0      0      0      0      0      0  GIC 389          e1000000.i2c
 15:          0      0      0      0      0      0  GIC 362          pl022
 16:          0      0      0      0      0      0  GIC 361          pl022
 46:         90      0      0      0      0      0  xen-percpu-virq  hvc_console
 47:        200      0      0      0      0      0  MSI 524288       enp1s0f0-TxRx-0
 48:         56      0      0      0      0      0  MSI 524289       enp1s0f0-TxRx-1
 49:         47      0      0      0      0      0  MSI 524290       enp1s0f0-TxRx-2
 50:         45      0      0      0      0      0  MSI 524291       enp1s0f0-TxRx-3
 51:         53      0      0      0      0      0  MSI 524292       enp1s0f0-TxRx-4
 52:         48      0      0      0      0      0  MSI 524293       enp1s0f0-TxRx-5
 53:          4      0      0      0      0      0  MSI 524294       enp1s0f0

linux:~ # ifconfig enp1s0f0
enp1s0f0  Link encap:Ethernet  HWaddr 00:1B:21:55:7F:14
          inet addr:10.236.19.5  Bcast:10.236.19.255  Mask:255.255.254.0
          inet6 addr: fe80::21b:21ff:fe55:7f14/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:333 errors:0 dropped:0 overruns:0 frame:0
          TX packets:111 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:33406 (32.6 Kb)  TX bytes:14957 (14.6 Kb)

linux:~ # lspci -vvv -s 1:0.0
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 44
        Region 0: Memory at bfe80000 (64-bit, prefetchable) [size=512K]
        Region 2: I/O ports at 0020 [size=32]
        Region 4: Memory at bff04000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
                Vector table: BAR=4 offset=00000000
                PBA: BAR=4 offset=00002000
....

And there you have it... GICv2m MSI(-X) support in Dom0 for Seattle ;) Thanks to Ian's PCI patch, which made porting much simpler.

Next, I'll clean up the code and send out the Xen patches for review. I'll also push the Linux changes (adding ARM64 PCI generic host controller support and Marc's MSI IRQ domain) to my Linux tree on GitHub. Then you could give it a try on your Seattle box.

Thanks,

Suravee


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

