
Re: [PATCH v10 1/7] remove remaining uses of iommu_legacy_map/unmap



On 20.11.2020 14:24, Paul Durrant wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -2489,10 +2489,16 @@ static int cleanup_page_mappings(struct page_info *page)
>  
>          if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
>          {
> -            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
> +            unsigned int flush_flags = 0;
> +            int err;
> +
> +            err = iommu_unmap(d, _dfn(mfn), 1ul << PAGE_ORDER_4K, &flush_flags);
> +            if ( !err && !this_cpu(iommu_dont_flush_iotlb) )
> +                err = iommu_iotlb_flush(d, _dfn(mfn), 1ul << PAGE_ORDER_4K,
> +                                        flush_flags);

As was the subject of XSA-346, honoring the flag on a path
leading to the freeing of a page _before_ the delayed flush
actually happens is wrong. Luckily the first of the two
patches for that XSA arranged for the flag never to be
observable as set here, so the check is merely pointless
rather than an actual bug. It should still be dropped,
though, for documentation purposes.
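To be concrete, what I'd expect is simply (merely a sketch
re-using the identifiers from your hunk, not even compile
tested):

            unsigned int flush_flags = 0;
            int err;

            err = iommu_unmap(d, _dfn(mfn), 1ul << PAGE_ORDER_4K, &flush_flags);
            /*
             * Flush unconditionally: per XSA-346 the dont-flush flag must
             * never be honored on a path freeing the page.
             */
            if ( !err )
                err = iommu_iotlb_flush(d, _dfn(mfn), 1ul << PAGE_ORDER_4K,
                                        flush_flags);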

> @@ -3014,14 +3020,20 @@ static int _get_page_type(struct page_info *page, unsigned long type,
>          if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
>          {
>              mfn_t mfn = page_to_mfn(page);
> +            dfn_t dfn = _dfn(mfn_x(mfn));
> +            unsigned int flush_flags = 0;
>  
>              if ( (x & PGT_type_mask) == PGT_writable_page )
> -                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
> -                                        1ul << PAGE_ORDER_4K);
> +                rc = iommu_unmap(d, dfn, 1ul << PAGE_ORDER_4K, &flush_flags);
>              else
> -                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
> -                                      1ul << PAGE_ORDER_4K,
> -                                      IOMMUF_readable | IOMMUF_writable);
> +            {
> +                rc = iommu_map(d, dfn, mfn, 1ul << PAGE_ORDER_4K,
> +                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
> +            }
> +
> +            if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
> +                rc = iommu_iotlb_flush(d, dfn, 1ul << PAGE_ORDER_4K,
> +                                       flush_flags);

Along the same lines here: at least the unmapping needs to be
followed by a flush before the page can assume its new role.
Yet again I don't think the flag can ever be observed as true
here, first and foremost because of the is_pv_domain() check
in the surrounding if(). While the dont-flush check could be
retained just for the case where the prior operation was a
map, I think it's again easier to simply delete the dead
check.
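I.e., for this path too, something along these lines (again
only a sketch based on the quoted context):

            mfn_t mfn = page_to_mfn(page);
            dfn_t dfn = _dfn(mfn_x(mfn));
            unsigned int flush_flags = 0;

            if ( (x & PGT_type_mask) == PGT_writable_page )
                rc = iommu_unmap(d, dfn, 1ul << PAGE_ORDER_4K, &flush_flags);
            else
                rc = iommu_map(d, dfn, mfn, 1ul << PAGE_ORDER_4K,
                               IOMMUF_readable | IOMMUF_writable,
                               &flush_flags);

            /*
             * Flush unconditionally; in particular an unmap has to be
             * flushed before the page can assume its new role.
             */
            if ( !rc )
                rc = iommu_iotlb_flush(d, dfn, 1ul << PAGE_ORDER_4K,
                                       flush_flags);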

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -842,15 +842,19 @@ out:
>      if ( rc == 0 && p2m_is_hostp2m(p2m) &&
>           need_modify_vtd_table )
>      {
> -        if ( iommu_use_hap_pt(d) && !this_cpu(iommu_dont_flush_iotlb) )
> -            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
> -                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
> -                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
> -                                                    : 0));
> -        else if ( need_iommu_pt_sync(d) )
> +        unsigned int flush_flags = 0;
> +
> +        if ( need_iommu_pt_sync(d) )
>              rc = iommu_flags ?
> -                iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
> -                iommu_legacy_unmap(d, _dfn(gfn), 1ul << order);
> +                iommu_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags,
> +                          &flush_flags) :
> +                iommu_unmap(d, _dfn(gfn), 1ul << order, &flush_flags);
> +        else if ( iommu_use_hap_pt(d) )
> +            flush_flags = (iommu_flags ? IOMMU_FLUSHF_added : 0) |
> +                          (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);

Is there a particular reason you inverted the order of the
iommu_use_hap_pt() and need_iommu_pt_sync() checks here?
The common (default) case for VT-x / VT-d / EPT is shared
page tables, so I think that should remain the path which
gets away with evaluating just a single conditional.
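That is, I'd prefer to see the shared-page-table case checked
first, along the lines of (illustrative only, using just the
code from your hunk):

        unsigned int flush_flags = 0;

        /* Shared page tables are the common case for EPT, so check first. */
        if ( iommu_use_hap_pt(d) )
            flush_flags = (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                          (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
        else if ( need_iommu_pt_sync(d) )
            rc = iommu_flags ?
                iommu_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags,
                          &flush_flags) :
                iommu_unmap(d, _dfn(gfn), 1ul << order, &flush_flags);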

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -836,8 +836,8 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>  
>      if ( is_iommu_enabled(d) )
>      {
> -       this_cpu(iommu_dont_flush_iotlb) = 1;
> -       extra.ppage = &pages[0];
> +        this_cpu(iommu_dont_flush_iotlb) = true;
> +        extra.ppage = &pages[0];
>      }

Is the respective part of the description ("no longer
pointlessly gated on is_iommu_enabled() returning true") stale?

> @@ -368,15 +360,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
>  
>  /*
>   * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
> - * avoid unecessary iotlb_flush in the low level IOMMU code.
> - *
> - * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
> - * this operation can be really expensive. This flag will be set by the
> - * caller to notify the low level IOMMU code to avoid the iotlb flushes.
> - * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
> - * the caller.
> + * avoid unnecessary IOMMU flushing while updating the P2M.
> + * Setting the value to true will cause iommu_iotlb_flush() to return without
> + * actually performing a flush. A batch flush must therefore be done by the
> + * calling code after setting the value back to false.

I guess this comment, too, is stale with respect to the v9
changes?

Jan
