Re: [Xen-devel] [PATCH v3 3/4] iommu: elide flushing for higher order map/unmap operations
>>> On 05.12.18 at 12:29, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -865,11 +865,15 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>
>          this_cpu(iommu_dont_flush_iotlb) = 0;
>
> -        ret = iommu_flush(d, _dfn(xatp->idx - done), done);
> +        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
> +                                IOMMU_FLUSHF_added |
> +                                IOMMU_FLUSHF_modified);

No need to split these last two lines afaict, nor ...

>         if ( unlikely(ret) && rc >= 0 )
>             rc = ret;
>
> -        ret = iommu_flush(d, _dfn(xatp->gpfn - done), done);
> +        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done,
> +                                IOMMU_FLUSHF_added |
> +                                IOMMU_FLUSHF_modified);

... these.

> @@ -573,18 +589,17 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
>     }
>
>     /* Install 4k mapping */
> -    need_flush = set_iommu_pte_present(pt_mfn[1], dfn_x(dfn), mfn_x(mfn), 1,
> -                                       !!(flags & IOMMUF_writable),
> -                                       !!(flags & IOMMUF_readable));
> -
> -    if ( need_flush )
> -        amd_iommu_flush_pages(d, dfn_x(dfn), 0);
> +    *flush_flags |= set_iommu_pte_present(pt_mfn[1], dfn_x(dfn), mfn_x(mfn),
> +                                          1, !!(flags & IOMMUF_writable),
> +                                          !!(flags & IOMMUF_readable));

I don't think the !! here need retaining.

> @@ -235,6 +236,10 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>                 process_pending_softirqs();
>     }
>
> +    /* Use while-break to avoid compiler warning */
> +    while ( !iommu_iotlb_flush_all(d, flush_flags) )
> +        break;

With just the "break;" as body, what's the ! good for?

> @@ -320,7 +326,8 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>         for ( i = 0; i < (1ul << page_order); i++ )
>         {
>             rc = hd->platform_ops->map_page(d, dfn_add(dfn, i),
> -                                            mfn_add(mfn, i), flags);
> +                                            mfn_add(mfn, i), flags,
> +                                            flush_flags);

Again no need for two lines here as it seems.
> @@ -345,7 +353,20 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
>     return rc;
> }
>
> -int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
> +int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
> +                     unsigned int page_order, unsigned int flags)
> +{
> +    unsigned int flush_flags = 0;
> +    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
> +
> +    if ( !rc && !this_cpu(iommu_dont_flush_iotlb) )
> +        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);

The question was raised in a different context (but iirc this same series)
already: Is it correct to skip flushing when failure occurred on other than
the first page of a set? There's no rollback afaict, and even if there was,
the transiently available mappings would then still need purging. Same on
the unmap side then. (Note that this is different from the
arch_iommu_populate_page_table() case, where I/O can't be initiated yet by
the guest.)

> @@ -241,8 +245,10 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>         if ( paging_mode_translate(d) )
>             rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
>         else
> -            rc = iommu_legacy_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
> -                                  IOMMUF_readable | IOMMUF_writable);
> +            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
> +                           IOMMUF_readable | IOMMUF_writable,
> +                           &flush_flags);

Again overly aggressive line wrapping?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel