
Re: [Xen-devel] [PATCH v7 6/6] x86/iommu: add map-reserved dom0-iommu option to map reserved memory ranges



Hi Roger,

On 22/08/18 08:52, Roger Pau Monne wrote:
Several people have reported hardware issues (malfunctioning USB
controllers) due to IOMMU page faults on Intel hardware. Those faults
are caused by missing RMRR (VT-d) entries in the ACPI tables. They can
be worked around on VT-d hardware by manually adding RMRR entries on
the command line; however, this is limited to Intel hardware and quite
cumbersome to do.
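For reference, the manual workaround mentioned above uses Xen's `rmrr=`
command-line option, which takes a page-frame range and the affected
device's SBDF; the values below are purely illustrative (see the
xen-command-line documentation for the exact syntax):

```
rmrr=0xd5d45-0xd5d46=0:1d.0
```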

In order to solve those issues, add a new dom0-iommu=map-reserved
option that identity-maps all regions marked as reserved in the memory
map. Regions used by devices emulated by Xen (the LAPIC, IO-APIC or
PCIe MCFG regions) are specifically avoided. Note that this option is
available to all Dom0 modes (as opposed to the inclusive option, which
only works for a PV Dom0).

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
---
Changes since v6:
  - Reword the map-reserved help to make it clear it's available to
    both PV and PVH Dom0.
  - Assign type inside of the switch expression.
  - Remove the comment about IO-APIC MMIO relocation, this is not
    supported ATM.

Changes since v5:
  - Merge with the vpci MMCFG helper patch.
  - Add a TODO item about the issues with relocating the LAPIC or
    IOAPIC MMIO regions.
  - Use the newly introduced page_get_ram_type that returns all the
    types that fall within a page.
  - Use paging_mode_translate instead of iommu_use_hap_pt when deciding
    whether to use set_identity_p2m_entry or iommu_map_page.

Changes since v4:
  - Use pfn_to_paddr.
  - Rebase on top of previous changes.
  - Change the default option setting to use if instead of a ternary
    operator.
  - Rename to map-reserved.

Changes since v3:
  - Add mappings if the iommu page tables are shared.

Changes since v2:
  - Fix comment regarding dom0-strict.
  - Change documentation style of xen command line.
  - Rename iommu_map to hwdom_iommu_map.
  - Move all the checks to hwdom_iommu_map.

Changes since v1:
  - Introduce a new reserved option instead of abusing the inclusive
    option.
  - Use the same helper function for PV and PVH in order to decide if a
    page should be added to the domain page tables.
  - Use the data inside of the domain struct to detect overlaps with
    emulated MMIO regions.
---
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Tim Deegan <tim@xxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
Cc: Brian Woods <brian.woods@xxxxxxx>
Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
---
  docs/misc/xen-command-line.markdown         |  9 ++++
  xen/arch/x86/hvm/io.c                       |  5 ++
  xen/drivers/passthrough/amd/pci_amd_iommu.c |  3 ++
  xen/drivers/passthrough/arm/smmu.c          |  1 +
  xen/drivers/passthrough/iommu.c             |  3 ++
  xen/drivers/passthrough/vtd/iommu.c         |  3 ++
  xen/drivers/passthrough/x86/iommu.c         | 53 ++++++++++++++++++---
  xen/include/asm-x86/hvm/io.h                |  3 ++
  xen/include/xen/iommu.h                     |  2 +-
  9 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 98f0f3b68b..1ffd586224 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -704,6 +704,15 @@ This list of booleans controls the iommu usage by Dom0:
    option is only applicable to a PV Dom0 and is enabled by default on Intel
    hardware.
+* `map-reserved`: sets up DMA remapping for all the reserved regions in the
+  memory map for Dom0. Use this to work around firmware issues providing
+  incorrect RMRR/IVMD entries. Rather than only mapping RAM pages for IOMMU
+  accesses for Dom0, all memory regions marked as reserved in the memory map
+  that don't overlap with any MMIO region from emulated devices will be
+  identity mapped. This option maps a subset of the memory that would be
+  mapped when using the `map-inclusive` option. This option is available to all
+  Dom0 modes and is enabled by default on Intel hardware.
+
  ### dom0\_ioports\_disable (x86)
  > `= List of <hex>-<hex>`
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf4d8748d3..1f8fe36168 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -404,6 +404,11 @@ static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
      return NULL;
  }
+bool vpci_is_mmcfg_address(const struct domain *d, paddr_t addr)
+{
+    return vpci_mmcfg_find(d, addr);
+}
+
  static unsigned int vpci_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
                                             paddr_t addr, pci_sbdf_t *sbdf)
  {
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 27eb49619d..49d934e1ac 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -256,6 +256,9 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
      /* Inclusive IOMMU mappings are disabled by default on AMD hardware. */
      if ( iommu_hwdom_inclusive == -1 )
          iommu_hwdom_inclusive = false;
+    /* Reserved IOMMU mappings are disabled by default on AMD hardware. */
+    if ( iommu_hwdom_reserved == -1 )
+        iommu_hwdom_reserved = false;

Same as patch #1, you are mixing boolean and integer.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
