
Re: [Xen-devel] PCI Passthrough ARM Design : Draft1





On Wednesday 10 June 2015 12:21 PM, Julien Grall wrote:
Hi,

On 10/06/2015 08:45, Ian Campbell wrote:
4. DomU access / assignment PCI device
--------------------------------------
When a device is attached to a domU, provision has to be made so that the
domU can access the MMIO space of the device and Xen is able to identify
the mapping between the guest bdf and the system bdf. Two hypercalls are
introduced.

I don't think we want/need new hypercalls here; the same existing
hypercalls which are used on x86 should be suitable.
I think both hypercalls are necessary (a rough sketch of both follows
after (b)):

a) The mapping of the guest bdf to the actual sbdf is required because
domU accesses to the GIC are trapped and are not handled by pciback. A
device, say 1:0:0.3, is assigned in the domU as 0:0:0.3. This is the best
way I could find that works.

b) The map_mmio call is issued just after the device is added to the PCI
bus (in the domU case). The function register_xen_pci_notifier
(drivers/xen/pci.c) is modified so that the notification is received in
both domU and dom0.
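
To make the discussion concrete, here is a rough sketch of what the two
proposed interfaces could look like. Nothing below exists today: the op
names, structure names and field layout are placeholders for discussion
only.

#include <stdint.h>

/* (a) Tell Xen how a guest-visible bdf maps onto the real sbdf, so that
 *     trapped accesses can be translated. */
struct physdev_map_sbdf {
    uint32_t domain_id;   /* domU the mapping applies to            */
    uint32_t sbdf;        /* real segment:bus:device.function       */
    uint32_t gbdf;        /* bdf as seen by the guest, e.g. 0:0:0.3 */
};

/* (b) Ask Xen to add the device's MMIO range to the domU's stage-2
 *     translation; issued from the kernel once the device shows up on
 *     the guest's PCI bus. */
struct physdev_map_mmio {
    uint32_t domain_id;   /* target domU                            */
    uint64_t addr;        /* start of the MMIO region (1:1 here)    */
    uint64_t size;        /* size of the MMIO region                */
};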
That's XEN_DOMCTL_memory_mapping from the toolstack, I think.

XEN_DOMCTL_memory_mapping is issued by QEMU for x86 HVM when the guest (i.e. hvmloader?) writes to the PCI BAR.
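
For reference, the toolstack/QEMU path boils down to the libxc wrapper
around that domctl. The helper below is only a sketch (the wrapper name
and the lack of error handling are mine); xc_domain_memory_mapping()
itself is the existing libxc call.

#include <xenctrl.h>

/* Sketch: map a BAR's machine address range at a guest physical address
 * via XEN_DOMCTL_memory_mapping, roughly what QEMU does for x86 HVM once
 * the BAR has been programmed.  Addresses and size are assumed to be
 * page-aligned. */
static int map_bar(xc_interface *xch, uint32_t domid,
                   uint64_t guest_addr, uint64_t machine_addr,
                   uint64_t size)
{
    return xc_domain_memory_mapping(xch, domid,
                                    guest_addr   >> XC_PAGE_SHIFT,
                                    machine_addr >> XC_PAGE_SHIFT,
                                    size >> XC_PAGE_SHIFT,
                                    DPCI_ADD_MAPPING);
}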

AFAIU, when the device is assigned to the guest, we don't yet know where the BAR will live in guest memory. It will be assigned by the guest (I wasn't able to find out whether Linux is able to do it).

As config space accesses will trap into pciback, we would need to map the physical memory into the guest from the kernel. A domain


Xen adds the MMIO space to the stage-2 translation for the domU. The
restriction is that Xen creates a 1:1 mapping of the MMIO address.

I don't think we need/want this restriction. We can define some
region(s) of guest memory to be an MMIO hole (by adding them to the
memory map in public/arch-arm.h).
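
For illustration, such a hole would just be another pair of defines in
the guest memory map. The names and addresses below are made up; the
actual values would have to be chosen to fit the existing layout in
xen/include/public/arch-arm.h.

/* Hypothetical addition to xen/include/public/arch-arm.h: a guest
 * physical window reserved for passed-through PCI BARs.  Placeholder
 * values only, not a proposal for the real base/size. */
#define GUEST_PCI_MMIO_BASE   0x30000000ULL
#define GUEST_PCI_MMIO_SIZE   0x10000000ULL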

Even if we decide to choose a 1:1 mapping, this should not be exposed in the hypervisor interface (see the suggested physdev_map_mmio) and should be left to the discretion of the toolstack domain.
See (b) above.

Beware that the 1:1 mapping doesn't fit with the current guest memory layout, which is pre-defined at Xen build time. So you would also have to make it dynamic or decide to use the same memory layout as the host.
If the same layout as the host is used, would there be any issue?

If there is a reason for this restriction/trade-off then it should be
spelled out as part of the design document, as should other such design
decisions (which would include explaining where this differs from how
things work for x86 and why they must differ).

On x86, for HVM, the MMIO mapping is done by QEMU. I know that Roger is working on PCI passthrough for PVH. PVH is very similar to an ARM guest and I expect similar needs for MMIO mapping. It would be good if we could come up with a common interface.

Regards,



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

