
Re: [Xen-devel] [PATCH 4/4] x86/iommu: add PVH support to the inclusive options



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf
> Of Roger Pau Monne
> Sent: 27 July 2018 16:32
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien Grall
> <julien.grall@xxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Roger Pau
> Monne <roger.pau@xxxxxxxxxx>
> Subject: [Xen-devel] [PATCH 4/4] x86/iommu: add PVH support to the
> inclusive options
> 
> Several people have reported hardware issues (malfunctioning USB
> controllers) due to IOMMU page faults. Those faults are caused by
> missing RMRR (VT-d) or IVMD (AMD-Vi) entries in the ACPI tables. They
> can be worked around on VT-d hardware by manually adding RMRR entries
> on the command line, but that is limited to Intel hardware and quite
> cumbersome to do.
> 
> In order to solve those issues, add PVH support to the inclusive option
> that identity-maps all regions marked as reserved in the memory map.
> Note that regions used by devices emulated by Xen (LAPIC, IO-APIC or
> PCIe MCFG regions) are specifically avoided. Also note that this option
> currently relies on no MSI-X MMIO areas residing in a reserved region,
> or else Xen won't be able to trap those accesses.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Julien Grall <julien.grall@xxxxxxx>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Cc: Tim Deegan <tim@xxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> ---
>  docs/misc/xen-command-line.markdown | 16 ++++--
>  xen/drivers/passthrough/x86/iommu.c | 82 +++++++++++++++++++++++------
>  2 files changed, 77 insertions(+), 21 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> index 91a8bfc9a6..c7c9a38c19 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -1203,11 +1203,17 @@ detection of systems known to misbehave upon accesses to that port.
>  > Default: `true`
> 
>  >> Use this to work around firmware issues providing incorrect RMRR or IVMD
> ->> entries. Rather than only mapping RAM pages for IOMMU accesses for Dom0,
> ->> with this option all pages up to 4GB, not marked as unusable in the E820
> ->> table, will get a mapping established. Note that this option is only
> ->> applicable to a PV dom0. Also note that if `dom0-strict` mode is enabled
> ->> then conventional RAM pages not assigned to dom0 will not be mapped.
> +>> entries. The behaviour of this option is slightly different between a PV and
> +>> a PVH Dom0:
> +>>
> +>> * For a PV Dom0 all pages up to 4GB not marked as unusable in the memory
> +>>   map will get a mapping established. Note that if `dom0-strict` mode is
> +>>   enabled then conventional RAM pages not assigned to dom0 will not be
> +>>   mapped.
> +>>
> +>> * For a PVH Dom0 all memory regions marked as reserved in the memory map
> +>>   that don't overlap with any MMIO region from emulated devices will be
> +>>   identity mapped.
> 
>  ### iommu\_dev\_iotlb\_timeout
>  > `= <integer>`
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index 24cc591aa5..cfafe1b572 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -20,6 +20,8 @@
>  #include <xen/softirq.h>
>  #include <xsm/xsm.h>
> 
> +#include <asm/apicdef.h>
> +#include <asm/io_apic.h>
>  #include <asm/setup.h>
> 
>  void iommu_update_ire_from_apic(
> @@ -134,11 +136,62 @@ void arch_iommu_domain_destroy(struct domain *d)
>  {
>  }
> 
> +static bool __hwdom_init pv_inclusive_map(unsigned long pfn,
> +                                          unsigned long max_pfn)

Perhaps pv_hwdom_inclusive_map() (and similarly pvh_hwdom_inclusive_map()) to
make it obvious that they are intended only for the hardware domain. (I know
the annotation makes this reasonably obvious, but other hwdom-specific
functions seem to carry this in their names.)
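
E.g., a naming sketch only of the above suggestion (not compile-tested):

static bool __hwdom_init pv_hwdom_inclusive_map(unsigned long pfn,
                                                unsigned long max_pfn);
static bool __hwdom_init pvh_hwdom_inclusive_map(const struct domain *d,
                                                 unsigned long pfn);

with the call site in arch_iommu_hwdom_init() adjusted to match:

        if ( is_pv_domain(d) ? !pv_hwdom_inclusive_map(pfn, max_pfn)
                             : !pvh_hwdom_inclusive_map(d, pfn) )
            continue;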

  Paul

> +{
> +    /*
> +     * If dom0-strict mode is enabled then exclude conventional RAM
> +     * and let the common code map dom0's pages.
> +     */
> +    if ( iommu_dom0_strict && page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
> +        return false;
> +    if ( iommu_inclusive && pfn <= max_pfn )
> +        return !page_is_ram_type(pfn, RAM_TYPE_UNUSABLE);
> +
> +    return page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL);
> +}
> +
> +static bool __hwdom_init pvh_inclusive_map(const struct domain *d,
> +                                           unsigned long pfn)
> +{
> +    unsigned int i;
> +
> +    /*
> +     * Ignore any address below 1MB, that's already identity mapped by the
> +     * domain builder.
> +     */
> +    if ( pfn < PFN_DOWN(MB(1)) )
> +        return false;
> +
> +    /* Only add reserved regions. */
> +    if ( !page_is_ram_type(pfn, RAM_TYPE_RESERVED) )
> +        return false;
> +
> +    /* Check that it doesn't overlap with the LAPIC */
> +    if ( pfn == PFN_DOWN(APIC_DEFAULT_PHYS_BASE) )
> +        return false;
> +    /* ... or the IO-APIC */
> +    for ( i = 0; i < nr_ioapics; i++ )
> +        if ( pfn == PFN_DOWN(domain_vioapic(d, i)->base_address) )
> +            return false;
> +    /* ... or the PCIe MCFG regions. */
> +    for ( i = 0; i < pci_mmcfg_config_num; i++ )
> +    {
> +        unsigned long addr = PFN_DOWN(pci_mmcfg_config[i].address);
> +
> +        if ( pfn >= addr + (pci_mmcfg_config[i].start_bus_number << 8) &&
> +             pfn < addr + (pci_mmcfg_config[i].end_bus_number << 8) )
> +            return false;
> +    }
> +
> +    return true;
> +}
> +
>  void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>  {
>      unsigned long i, j, tmp, top, max_pfn;
> 
> -    if ( iommu_passthrough || !is_pv_domain(d) )
> +    if ( iommu_passthrough )
>          return;
> 
>      BUG_ON(!is_hardware_domain(d));
> @@ -149,7 +202,6 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>      for ( i = 0; i < top; i++ )
>      {
>          unsigned long pfn = pdx_to_pfn(i);
> -        bool map;
>          int rc = 0;
> 
>          /*
> @@ -163,25 +215,23 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>               xen_in_range(pfn) )
>              continue;
> 
> -        /*
> -         * If dom0-strict mode is enabled then exclude conventional RAM
> -         * and let the common code map dom0's pages.
> -         */
> -        if ( iommu_dom0_strict &&
> -             page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL) )
> -            map = false;
> -        else if ( iommu_inclusive && pfn <= max_pfn )
> -            map = !page_is_ram_type(pfn, RAM_TYPE_UNUSABLE);
> -        else
> -            map = page_is_ram_type(pfn, RAM_TYPE_CONVENTIONAL);
> -
> -        if ( !map )
> +        if ( is_pv_domain(d) ? !pv_inclusive_map(pfn, max_pfn)
> +                             : !pvh_inclusive_map(d, pfn) )
>              continue;
> 
>          tmp = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K);
>          for ( j = 0; j < tmp; j++ )
>          {
> -            int ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
> +            int ret;
> +
> +            if ( iommu_use_hap_pt(d) )
> +            {
> +                ASSERT(is_hvm_domain(d));
> +                ret = set_identity_p2m_entry(d, pfn * tmp + j, p2m_access_rw,
> +                                             0);
> +            }
> +            else
> +                ret = iommu_map_page(d, pfn * tmp + j, pfn * tmp + j,
>                                       IOMMUF_readable|IOMMUF_writable);
> 
>              if ( !rc )
> --
> 2.18.0
> 
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

