[RFC XEN PATCH v1] xen/public: Add initial files for PV-IOMMU
Hello,

I am introducing a proposal for a PV-IOMMU hypercall interface. Some operating
systems want to use the IOMMU to implement various features (e.g. VFIO) or DMA
protection. This proposal aims to provide guests (notably Dom0) with a way to
access a paravirtualized one.

This proposal is based on what was presented on the XCP-ng blog [1], with some
notable changes:
- it is now possible to specify a number of contiguous pages to map/unmap;
  this replaces the "sub-operation count" parameter of the hypercall that
  allowed batching operations, which I found too complex and not really
  practical
- it is now possible for the guest to query the PV-IOMMU capabilities
  (max iova, maximum number of contexts, max pages in a single operation)

This patch includes a design document describing the main ideas and a public
header for the interface to use.

Teddy

---

[1] https://xcp-ng.org/blog/2024/04/18/iommu-paravirtualization-for-xen/

Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Bertrand Marquis <bertrand.marquis@xxxxxxx>
Cc: Rahul Singh <rahul.singh@xxxxxxx>
---
 docs/designs/pv-iommu.md      | 105 +++++++++++++++++++++++++++++++
 xen/include/public/pv-iommu.h | 114 ++++++++++++++++++++++++++++++++++
 2 files changed, 219 insertions(+)
 create mode 100644 docs/designs/pv-iommu.md
 create mode 100644 xen/include/public/pv-iommu.h

diff --git a/docs/designs/pv-iommu.md b/docs/designs/pv-iommu.md
new file mode 100644
index 0000000000..c01062a3ad
--- /dev/null
+++ b/docs/designs/pv-iommu.md
@@ -0,0 +1,105 @@
+# IOMMU paravirtualization for Dom0
+
+Status: Experimental
+
+# Background
+
+By default, Xen only uses the IOMMU for itself, either to make the device address
+space coherent with the guest address space (x86 HVM/PVH) or to prevent devices
+from doing DMA outside their expected memory regions, including the hypervisor
+(x86 PV).
+
+A limitation is that guests (especially privileged ones) may want to use the
+IOMMU hardware in order to implement features such as DMA protection and
+VFIO [1], as IOMMU functionality is currently not available outside of the
+hypervisor.
+
+[1] VFIO - "Virtual Function I/O" - https://www.kernel.org/doc/html/latest/driver-api/vfio.html
+
+# Design
+
+The operating system may want to have access to various IOMMU features such as
+context management and DMA remapping. We can create a new hypercall that allows
+the guest to access a new paravirtualized IOMMU interface.
+
+This feature is only meant to be available for Dom0: DomUs have some emulated
+devices that are not hardware and cannot be managed on the Xen side, so we
+cannot rely on the hardware IOMMU to enforce DMA remapping for them.
+
+This interface is exposed under the `iommu_op` hypercall.
+
+In addition, Xen domains are modified in order to allow the existence of several
+IOMMU contexts, including a default one that implements the default behaviour
+(e.g. hardware assisted paging) and cannot be modified by the guest. DomUs cannot
+have contexts, and therefore act as if they only have the default context.
+
+Each IOMMU context within a Xen domain is identified using a domain-specific
+context number that is used in the Xen IOMMU subsystem and the hypercall
+interface.
+
+The number of IOMMU contexts a domain can use is predetermined at domain creation
+and is configurable through the `dom0-iommu=nb-ctx=N` Xen command line option.
+
+# IOMMU operations
+
+## Alloc context
+
+Create a new IOMMU context for the guest and return the context number to the
+guest.
+Fail if the IOMMU context limit of the guest is reached.
+
+A flag can be specified to create an identity mapping.
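+
+As an illustration, allocating a fresh context from the guest might look like
+the sketch below. This is only a sketch: `pv_iommu_hypercall()` stands for
+whatever guest-side wrapper ends up issuing the `iommu_op` hypercall and is not
+part of this proposal, and the usual fixed-width integer types are assumed to be
+available; `struct pv_iommu_op`, `IOMMUOP_alloc_context` and `IOMMU_CREATE_clone`
+are defined in the public header added by this patch.
+
+```c
+/* Hypothetical wrapper issuing the iommu_op hypercall (not defined here). */
+extern int pv_iommu_hypercall(struct pv_iommu_op *op);
+
+/* Allocate a new IOMMU context, optionally cloned from the default one,
+ * and return its number through ctx_no.  Returns non-zero on failure,
+ * e.g. when the context limit of the domain is reached. */
+static int pviommu_alloc_context(uint16_t *ctx_no, bool clone_default)
+{
+    struct pv_iommu_op op = {
+        .subop_id = IOMMUOP_alloc_context,
+        /* IOMMU_CREATE_clone asks for 1:1 mappings of the guest memory. */
+        .flags = clone_default ? IOMMU_CREATE_clone : 0,
+    };
+    int rc = pv_iommu_hypercall(&op);
+
+    if ( rc )
+        return rc;
+
+    *ctx_no = op.ctx_no; /* context number chosen by Xen */
+    return 0;
+}
+```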
+
+## Free context
+
+Destroy an IOMMU context created previously.
+It is not possible to free the default context.
+
+Reattach the devices of the context to the default context if specified by the
+guest.
+
+Fail if there is a device in the context and the reattach-to-default flag is not
+specified.
+
+## Reattach device
+
+Reattach a device to another IOMMU context (including the default one).
+The target IOMMU context number must be valid and the context allocated.
+
+The guest needs to specify the PCI SBDF of a device it has access to.
+
+## Map/unmap page
+
+Map/unmap a page on a context.
+The guest needs to specify a gfn and a target dfn to map.
+
+Refuse to create the mapping if one already exists for the same dfn.
+
+## Lookup page
+
+Get the gfn mapped by a specific dfn.
+
+# Implementation considerations
+
+## Hypercall batching
+
+In order to prevent unneeded hypercalls and IOMMU flushing, it is advisable to
+be able to batch some critical IOMMU operations (e.g. map/unmap multiple pages).
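+
+For instance, mapping a contiguous range of guest pages with a single hypercall
+could look like the sketch below. This is illustrative only: `pv_iommu_hypercall()`
+is again a placeholder for the guest-side hypercall wrapper, and it assumes that
+a partially-completed operation reports its progress through `map_pages.mapped`
+so that the caller can continue from where the previous sub-operation stopped.
+
+```c
+/* Hypothetical wrapper issuing the iommu_op hypercall (not defined here). */
+extern int pv_iommu_hypercall(struct pv_iommu_op *op);
+
+/* Map nr_pages contiguous pages (gfn -> dfn) into a context, batching as
+ * many pages as the hypervisor accepts per hypercall. */
+static int pviommu_map_range(uint16_t ctx_no, uint64_t gfn, uint64_t dfn,
+                             uint32_t nr_pages)
+{
+    while ( nr_pages )
+    {
+        struct pv_iommu_op op = {
+            .subop_id = IOMMUOP_map_pages,
+            .ctx_no = ctx_no,
+            .flags = IOMMU_OP_readable | IOMMU_OP_writeable,
+            .map_pages = {
+                .gfn = gfn,
+                .dfn = dfn,
+                .nr_pages = nr_pages,
+            },
+        };
+        int rc = pv_iommu_hypercall(&op);
+        uint32_t done = op.map_pages.mapped;
+
+        if ( rc )
+            return rc;   /* e.g. a dfn in the range is already mapped */
+        if ( !done )
+            return -1;   /* no forward progress, give up */
+
+        gfn += done;
+        dfn += done;
+        nr_pages -= done;
+    }
+
+    return 0;
+}
+```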
+
+## Hardware without IOMMU support
+
+The operating system needs to be aware of the PV-IOMMU capability, and of whether
+it is able to create contexts. Some operating systems may critically fail if they
+are unable to create a new IOMMU context, which is what is expected to happen
+when no IOMMU hardware is available.
+
+The hypercall interface needs a way to advertise the ability to create and
+manage IOMMU contexts, including the number of contexts the guest is able to
+use. Using this information, Dom0 may decide whether or not to use the
+PV-IOMMU interface.
+
+## Page pool for contexts
+
+In order to prevent unexpected starvation of hypervisor memory by a buggy Dom0,
+we can preallocate the pages the contexts will use and make map/unmap use these
+pages instead of allocating them dynamically.
+
diff --git a/xen/include/public/pv-iommu.h b/xen/include/public/pv-iommu.h
new file mode 100644
index 0000000000..45f9c44eb1
--- /dev/null
+++ b/xen/include/public/pv-iommu.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ * pv-iommu.h
+ *
+ * Paravirtualized IOMMU driver interface.
+ *
+ * Copyright (c) 2024 Teddy Astie <teddy.astie@xxxxxxxxxx>
+ */
+
+#ifndef __XEN_PUBLIC_PV_IOMMU_H__
+#define __XEN_PUBLIC_PV_IOMMU_H__
+
+#include "xen.h"
+#include "physdev.h"
+
+#define IOMMU_DEFAULT_CONTEXT (0)
+
+/**
+ * Query PV-IOMMU capabilities for this domain.
+ */
+#define IOMMUOP_query_capabilities 1
+
+/**
+ * Allocate an IOMMU context; the new context handle will be written to ctx_no.
+ */
+#define IOMMUOP_alloc_context 2
+
+/**
+ * Destroy an IOMMU context.
+ * All devices attached to this context are reattached to the default context.
+ *
+ * The default context (0) can't be destroyed.
+ */
+#define IOMMUOP_free_context 3
+
+/**
+ * Reattach the device to an IOMMU context.
+ */
+#define IOMMUOP_reattach_device 4
+
+#define IOMMUOP_map_pages 5
+#define IOMMUOP_unmap_pages 6
+
+/**
+ * Get the GFN associated with a specific DFN.
+ */
+#define IOMMUOP_lookup_page 7
+
+struct pv_iommu_op {
+    uint16_t subop_id;
+    uint16_t ctx_no;
+
+/**
+ * Create a context that is cloned from the default one.
+ * The new context will be populated with 1:1 mappings covering the entire guest memory.
+ */
+#define IOMMU_CREATE_clone (1 << 0)
+
+#define IOMMU_OP_readable (1 << 0)
+#define IOMMU_OP_writeable (1 << 1)
+    uint32_t flags;
+
+    union {
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+            /* Number of pages to map */
+            uint32_t nr_pages;
+            /* Number of pages actually mapped after sub-op */
+            uint32_t mapped;
+        } map_pages;
+
+        struct {
+            uint64_t dfn;
+            /* Number of pages to unmap */
+            uint32_t nr_pages;
+            /* Number of pages actually unmapped after sub-op */
+            uint32_t unmapped;
+        } unmap_pages;
+
+        struct {
+            struct physdev_pci_device dev;
+        } reattach_device;
+
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+        } lookup_page;
+
+        struct {
+            /* Maximum number of IOMMU contexts this domain can use. */
+            uint16_t max_ctx_no;
+            /* Maximum number of pages that can be modified in a single map/unmap operation. */
+            uint32_t max_nr_pages;
+            /* Maximum device address (iova) that the guest can use for mappings. */
+            uint64_t max_iova_addr;
+        } cap;
+    };
+};
+
+typedef struct pv_iommu_op pv_iommu_op_t;
+DEFINE_XEN_GUEST_HANDLE(pv_iommu_op_t);
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
\ No newline at end of file
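
For illustration, a Dom0 driver could probe the interface along these lines
before deciding whether to use PV-IOMMU (rough sketch only; pv_iommu_hypercall()
is a placeholder for whatever wrapper ends up issuing the iommu_op hypercall,
and it assumes the hypercall simply fails when PV-IOMMU is not exposed to the
domain):

/* Hypothetical wrapper issuing the iommu_op hypercall (not part of this patch). */
extern int pv_iommu_hypercall(struct pv_iommu_op *op);

/* Return true if PV-IOMMU is usable, i.e. the capability query succeeds and
 * at least one non-default context can be allocated. */
static bool pviommu_usable(void)
{
    struct pv_iommu_op op = {
        .subop_id = IOMMUOP_query_capabilities,
    };

    if ( pv_iommu_hypercall(&op) )
        return false; /* interface not available */

    /* Without any non-default context (e.g. no IOMMU hardware), there is
     * nothing the guest can manage through PV-IOMMU. */
    return op.cap.max_ctx_no > 0;
}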
-- 
2.44.0

Teddy Astie | Vates XCP-ng Intern
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech