
[Xen-devel] Ping: [PATCH v2] AMD/IOMMU: fix off-by-one in amd_iommu_get_paging_mode() callers



On 03.03.2020 12:02, Jan Beulich wrote:
> amd_iommu_get_paging_mode() expects a count, not a "maximum possible"
> value. Prior to b4f042236ae0 dropping the reference, the use of our mis-
> named "max_page" in amd_iommu_domain_init() may have lead to such a
> misunderstanding. In an attempt to avoid such confusion in the future,
> rename the function's parameter and - while at it - convert it to an
> inline function.
> 
> Also replace a literal 4 by an expression tying it to a more widely
> used constant, just like amd_iommu_quarantine_init() does.
> 
> Fixes: ea38867831da ("x86 / iommu: set up a scratch page in the quarantine domain")
> Fixes: b4f042236ae0 ("AMD/IOMMU: Cease using a dynamic height for the IOMMU pagetables")
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Convert amd_iommu_get_paging_mode() itself to inline function,
>     changing its parameter's name.

Ping? Anything else needed here, beyond addressing your v1 comments?

Thanks, Jan

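For readers following the discussion, below is a minimal standalone sketch
(not part of the patch) of the boundary case being fixed. The constants are
re-derived from the AMD IOMMU definitions purely for illustration, and
get_paging_mode() simply mirrors the helper the patch converts to an inline
function:

/*
 * Illustration only: why the function must be given a frame *count*
 * rather than a highest frame number.  PTE_PER_TABLE_* re-derived here
 * (9 bits per level) just for this sketch.
 */
#include <stdio.h>

#define PTE_PER_TABLE_SHIFT 9
#define PTE_PER_TABLE_SIZE  (1UL << PTE_PER_TABLE_SHIFT)
#define PTE_PER_TABLE_ALIGN(x) \
    (((x) + PTE_PER_TABLE_SIZE - 1) & ~(PTE_PER_TABLE_SIZE - 1))

static int get_paging_mode(unsigned long frames)
{
    int level = 1;

    while ( frames > PTE_PER_TABLE_SIZE )
    {
        frames = PTE_PER_TABLE_ALIGN(frames) >> PTE_PER_TABLE_SHIFT;
        if ( ++level > 6 )
            return -1;
    }

    return level;
}

int main(void)
{
    /* Frames 0..511 fit into a single level-1 table. */
    printf("513 frames (count)        -> level %d\n", get_paging_mode(513));
    /* Passing the highest frame number instead of the count stops the
     * loop one level too early at exact table boundaries. */
    printf("512 (highest frame no.)   -> level %d\n", get_paging_mode(512));

    return 0;
}

With 513 frames the count-based call returns level 2 as needed, while the
"maximum possible" value 512 yields level 1, which cannot cover frame 512 at
all; that is the off-by-one corrected in both callers.
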
> ---
> Note: I'm not at the same time adding error checking here, despite
>       amd_iommu_get_paging_mode() possibly returning one, as I think
>       that's a sufficiently orthogonal aspect.
> 
> --- a/xen/drivers/passthrough/amd/iommu.h
> +++ b/xen/drivers/passthrough/amd/iommu.h
> @@ -218,7 +218,6 @@ int amd_iommu_init_late(void);
>  int amd_iommu_update_ivrs_mapping_acpi(void);
>  int iov_adjust_irq_affinities(void);
>  
> -int amd_iommu_get_paging_mode(unsigned long entries);
>  int amd_iommu_quarantine_init(struct domain *d);
>  
>  /* mapping functions */
> @@ -341,6 +340,22 @@ static inline unsigned long region_to_pa
>      return (PAGE_ALIGN(addr + size) - (addr & PAGE_MASK)) >> PAGE_SHIFT;
>  }
>  
> +static inline int amd_iommu_get_paging_mode(unsigned long max_frames)
> +{
> +    int level = 1;
> +
> +    BUG_ON(!max_frames);
> +
> +    while ( max_frames > PTE_PER_TABLE_SIZE )
> +    {
> +        max_frames = PTE_PER_TABLE_ALIGN(max_frames) >> PTE_PER_TABLE_SHIFT;
> +        if ( ++level > 6 )
> +            return -ENOMEM;
> +    }
> +
> +    return level;
> +}
> +
>  static inline struct page_info *alloc_amd_iommu_pgtable(void)
>  {
>      struct page_info *pg = alloc_domheap_page(NULL, 0);
> --- a/xen/drivers/passthrough/amd/iommu_map.c
> +++ b/xen/drivers/passthrough/amd/iommu_map.c
> @@ -445,9 +445,9 @@ int amd_iommu_reserve_domain_unity_map(s
>  int __init amd_iommu_quarantine_init(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> -    unsigned long max_gfn =
> -        PFN_DOWN((1ul << DEFAULT_DOMAIN_ADDRESS_WIDTH) - 1);
> -    unsigned int level = amd_iommu_get_paging_mode(max_gfn);
> +    unsigned long end_gfn =
> +        1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT);
> +    unsigned int level = amd_iommu_get_paging_mode(end_gfn);
>      struct amd_iommu_pte *table;
>  
>      if ( hd->arch.root_table )
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -228,22 +228,6 @@ static int __must_check allocate_domain_
>      return rc;
>  }
>  
> -int amd_iommu_get_paging_mode(unsigned long entries)
> -{
> -    int level = 1;
> -
> -    BUG_ON( !entries );
> -
> -    while ( entries > PTE_PER_TABLE_SIZE )
> -    {
> -        entries = PTE_PER_TABLE_ALIGN(entries) >> PTE_PER_TABLE_SHIFT;
> -        if ( ++level > 6 )
> -            return -ENOMEM;
> -    }
> -
> -    return level;
> -}
> -
>  static int amd_iommu_domain_init(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> @@ -256,8 +240,10 @@ static int amd_iommu_domain_init(struct
>       *   physical address space we give it, but this isn't known yet so use 4
>       *   unilaterally.
>       */
> -    hd->arch.paging_mode = is_hvm_domain(d)
> -        ? 4 : amd_iommu_get_paging_mode(get_upper_mfn_bound());
> +    hd->arch.paging_mode = amd_iommu_get_paging_mode(
> +        is_hvm_domain(d)
> +        ? 1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT)
> +        : get_upper_mfn_bound() + 1);
>  
>      return 0;
>  }
> 
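For reference, a worked instance of the new expression (assuming
DEFAULT_DOMAIN_ADDRESS_WIDTH = 48 and PAGE_SHIFT = 12, as on x86): the HVM
and quarantine callers now pass 1ul << 36 frames, which the loop reduces as
2^36 -> 2^27 -> 2^18 -> 2^9 and hence returns paging mode 4, i.e. the same
value as the literal being replaced.
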


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

