
Re: [Xen-devel] [PATCH] amd/iommu: remove hidden AMD inclusive mappings



> -----Original Message-----
> From: Roger Pau Monne [mailto:roger.pau@xxxxxxxxxx]
> Sent: 21 September 2018 16:21
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Suravee Suthikulpanit
> <suravee.suthikulpanit@xxxxxxx>; Brian Woods <brian.woods@xxxxxxx>; Kevin
> Tian <kevin.tian@xxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Paul Durrant
> <Paul.Durrant@xxxxxxxxxx>
> Subject: [PATCH] amd/iommu: remove hidden AMD inclusive mappings
> 
> And just rely on arch_iommu_hwdom_init to setup the correct inclusive
> mappings as it's done for Intel.
> 
> AMD has code in amd_iommu_hwdom_init to setup inclusive mappings up to
> max_pdx, remove this since it's now a duplication of
> arch_iommu_hwdom_init. Note that AMD mapped every page with a valid
> mfn up to max_pdx, arch_iommu_hwdom_init will only do so for memory
> below 4GB, so this is a functional change for AMD.
> 
> Move the default setting of iommu_hwdom_{inclusive/reserved} to
> arch_iommu_hwdom_init since the defaults are now the same for both
> Intel and AMD.
> 

IMO the functional change and the consequent alignment of the defaults are a good
thing.

> Reported-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> ---
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
> Cc: Brian Woods <brian.woods@xxxxxxx>
> Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
>  xen/drivers/passthrough/amd/pci_amd_iommu.c | 39 ---------------------
>  xen/drivers/passthrough/vtd/iommu.c         |  7 ----
>  xen/drivers/passthrough/x86/iommu.c         |  8 ++++-
>  3 files changed, 7 insertions(+), 47 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index 4a633ca940..821fe03df5 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -250,50 +250,11 @@ static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev);
> 
>  static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
>  {
> -    unsigned long i;
>      const struct amd_iommu *iommu;
> 
> -    /* Inclusive IOMMU mappings are disabled by default on AMD hardware. */
> -    if ( iommu_hwdom_inclusive == -1 )
> -        iommu_hwdom_inclusive = 0;
> -    /* Reserved IOMMU mappings are disabled by default on AMD hardware. */
> -    if ( iommu_hwdom_reserved == -1 )
> -        iommu_hwdom_reserved = 0;
> -
>      if ( allocate_domain_resources(dom_iommu(d)) )
>          BUG();
> 
> -    if ( !iommu_hwdom_passthrough && !need_iommu(d) )
> -    {
> -        int rc = 0;
> -
> -        /* Set up 1:1 page table for dom0 */
> -        for ( i = 0; i < max_pdx; i++ )
> -        {
> -            unsigned long pfn = pdx_to_pfn(i);
> -
> -            /*
> -             * XXX Should we really map all non-RAM (above 4G)? Minimally
> -             * a pfn_valid() check would seem desirable here.
> -             */
> -            if ( mfn_valid(_mfn(pfn)) )
> -            {
> -                int ret = amd_iommu_map_page(d, pfn, pfn,
> -                                             IOMMUF_readable|IOMMUF_writable);
> -
> -                if ( !rc )
> -                    rc = ret;
> -            }
> -
> -            if ( !(i & 0xfffff) )
> -                process_pending_softirqs();
> -        }
> -
> -        if ( rc )
> -            AMD_IOMMU_DEBUG("d%d: IOMMU mapping failed: %d\n",
> -                            d->domain_id, rc);
> -    }
> -
>      for_each_amd_iommu ( iommu )
>          if ( iomem_deny_access(d, PFN_DOWN(iommu->mmio_base_phys),
>                                 PFN_DOWN(iommu->mmio_base_phys +
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index bb422ec58c..cf8a80d7a1 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1304,13 +1304,6 @@ static void __hwdom_init intel_iommu_hwdom_init(struct domain *d)
>  {
>      struct acpi_drhd_unit *drhd;
> 
> -    /* Inclusive mappings are enabled by default on Intel hardware for PV. */
> -    if ( iommu_hwdom_inclusive == -1 )
> -        iommu_hwdom_inclusive = is_pv_domain(d);
> -    /* Reserved IOMMU mappings are enabled by default on Intel hardware. */
> -    if ( iommu_hwdom_reserved == -1 )
> -        iommu_hwdom_reserved = 1;
> -
>      setup_hwdom_pci_devices(d, setup_hwdom_device);
>      setup_hwdom_rmrr(d);
>      /* Make sure workarounds are applied before enabling the IOMMU(s). */
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index b7c8b5be41..2de8822c59 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -210,7 +210,13 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
> 
>      BUG_ON(!is_hardware_domain(d));
> 
> -    ASSERT(iommu_hwdom_inclusive != -1 && iommu_hwdom_inclusive != -1);
> +    /* Inclusive mappings are enabled by default for PV. */
> +    if ( iommu_hwdom_inclusive == -1 )
> +        iommu_hwdom_inclusive = is_pv_domain(d);
> +    /* Reserved IOMMU mappings are enabled by default. */
> +    if ( iommu_hwdom_reserved == -1 )
> +        iommu_hwdom_reserved = 1;
> +
>      if ( iommu_hwdom_inclusive && !is_pv_domain(d) )
>      {
>          printk(XENLOG_WARNING
> --
> 2.19.0

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

