[RFC PATCH 4/6] libxl/arm: Reuse generic PCI-IOMMU bindings for virtio-pci devices
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

Use the same "xen-grant-dma" device concept for PCI devices behind a
device-tree based PCI host controller, but with one modification.
Unlike for platform devices, we cannot use the generic IOMMU bindings
(the "iommus" property), as we need to support a more flexible
configuration. The problem is that PCI devices under a single PCI host
controller may have their backends running in different Xen domains and
thus have different endpoint IDs (backend domain IDs).

Reuse the generic PCI-IOMMU bindings (the "iommu-map"/"iommu-map-mask"
properties), which allow us to properly describe the relationship
between PCI devices and backend domain IDs.

A Linux guest is already able to deal with the generic PCI-IOMMU
bindings (see Linux drivers/xen/grant-dma-ops.c for details). According
to Linux:
- Documentation/devicetree/bindings/pci/pci-iommu.txt
- Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
---
 tools/libs/light/libxl_arm.c | 64 ++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index df6cbbe756..03cf3e424b 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -1030,6 +1030,7 @@ static int make_vpci_node(libxl__gc *gc, void *fdt,
 }
 
 #define PCI_IRQ_MAP_MIN_STRIDE 8
+#define PCI_IOMMU_MAP_STRIDE 4
 
 static int create_virtio_pci_irq_map(libxl__gc *gc, void *fdt,
                                      libxl_virtio_pci_host *host)
@@ -1099,6 +1100,65 @@ static int create_virtio_pci_irq_map(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+/* XXX Consider reusing libxl__realloc() to avoid an extra loop */
+static int create_virtio_pci_iommu_map(libxl__gc *gc, void *fdt,
+                                       libxl_virtio_pci_host *host,
+                                       libxl_domain_config *d_config)
+{
+    uint32_t *full_iommu_map, *iommu_map;
+    unsigned int i, len, ntranslated = 0;
+    int res;
+
+    for (i = 0; i < d_config->num_virtios; i++) {
+        libxl_device_virtio *virtio = &d_config->virtios[i];
+
+        if (libxl_defbool_val(virtio->grant_usage) &&
+            virtio->transport == LIBXL_VIRTIO_TRANSPORT_PCI &&
+            virtio->u.pci.host_id == host->id) {
+            ntranslated++;
+        }
+    }
+
+    if (!ntranslated)
+        return 0;
+
+    len = ntranslated * sizeof(uint32_t) * PCI_IOMMU_MAP_STRIDE;
+    full_iommu_map = libxl__malloc(gc, len);
+    iommu_map = full_iommu_map;
+
+    /* See Linux Documentation/devicetree/bindings/pci/pci-iommu.txt */
+    for (i = 0; i < d_config->num_virtios; i++) {
+        libxl_device_virtio *virtio = &d_config->virtios[i];
+
+        if (libxl_defbool_val(virtio->grant_usage) &&
+            virtio->transport == LIBXL_VIRTIO_TRANSPORT_PCI &&
+            virtio->u.pci.host_id == host->id) {
+            uint16_t bdf = (virtio->u.pci.bus << 8) |
+                (virtio->u.pci.dev << 3) | virtio->u.pci.func;
+            unsigned int j = 0;
+
+            /* rid_base (1 cell) */
+            iommu_map[j++] = cpu_to_fdt32(bdf);
+
+            /* iommu_phandle (1 cell) */
+            iommu_map[j++] = cpu_to_fdt32(GUEST_PHANDLE_IOMMU);
+
+            /* iommu_base (1 cell) */
+            iommu_map[j++] = cpu_to_fdt32(virtio->backend_domid);
+
+            /* length (1 cell) */
+            iommu_map[j++] = cpu_to_fdt32(1 << 3);
+
+            iommu_map += PCI_IOMMU_MAP_STRIDE;
+        }
+    }
+
+    res = fdt_property(fdt, "iommu-map", full_iommu_map, len);
+    if (res) return res;
+
+    return 0;
+}
+
 /* TODO Consider reusing make_vpci_node() */
 static int make_virtio_pci_node(libxl__gc *gc, void *fdt,
                                 libxl_virtio_pci_host *host,
@@ -1147,6 +1207,10 @@ static int make_virtio_pci_node(libxl__gc *gc, void *fdt,
     res = create_virtio_pci_irq_map(gc, fdt, host);
     if (res) return res;
 
+    /* xen,grant-dma bindings */
+    res = create_virtio_pci_iommu_map(gc, fdt, host, d_config);
+    if (res) return res;
+
     res = fdt_end_node(fdt);
     if (res) return res;
 
-- 
2.25.1
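
For illustration only (not part of the patch), below is a rough sketch of the
kind of device-tree fragment this helper is expected to produce, assuming a
single grant-using virtio-pci device at 00:00.0 whose backend runs in domain 1.
The node names, the unit address and the phandle label are made up; the actual
phandle libxl uses is GUEST_PHANDLE_IOMMU, shown here symbolically via the
label reference:

    xen_iommu: xen-iommu {
        compatible = "xen,grant-dma";
        /* One specifier cell per master: the backend domain ID */
        #iommu-cells = <0x1>;
    };

    pcie@33000000 {
        /* (other virtio-pci host bridge properties omitted) */

        /*
         * One PCI_IOMMU_MAP_STRIDE (4-cell) entry per grant-using
         * virtio-pci device:
         *   <rid-base  iommu-phandle  iommu-base (backend domid)  length>
         * Here: RID 0x0 (00:00.0), backend in domain 1, and 8 RIDs
         * covered (one per function), matching the "1 << 3" length
         * written by create_virtio_pci_iommu_map().
         */
        iommu-map = <0x0 &xen_iommu 0x1 0x8>;
    };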