
[PATCH RFC v2 8/8] dma-iommu: Support DMA sync batch mode for iommu_dma_sync_sg_for_{cpu, device}



From: Barry Song <baohua@xxxxxxxxxx>

Apply batched DMA synchronization to iommu_dma_sync_sg_for_cpu() and
iommu_dma_sync_sg_for_device(): move the flush out of the per-segment
loop so that a single flush operation covers all buffers in an SG list.

I do not have the hardware to test this, so the patch is marked as
RFC. I would greatly appreciate any testing feedback.

Cc: Leon Romanovsky <leon@xxxxxxxxxx>
Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Ada Couprie Diaz <ada.coupriediaz@xxxxxxx>
Cc: Ard Biesheuvel <ardb@xxxxxxxxxx>
Cc: Marc Zyngier <maz@xxxxxxxxxx>
Cc: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Robin Murphy <robin.murphy@xxxxxxx>
Cc: Joerg Roedel <joro@xxxxxxxxxx>
Cc: Tangquan Zheng <zhengtangquan@xxxxxxxx>
Signed-off-by: Barry Song <baohua@xxxxxxxxxx>
---
 drivers/iommu/dma-iommu.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ffa940bdbbaf..b68dbfcb7846 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1131,10 +1131,9 @@ void iommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
                        iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
                                                      sg->length, dir);
        } else if (!dev_is_dma_coherent(dev)) {
-               for_each_sg(sgl, sg, nelems, i) {
+               for_each_sg(sgl, sg, nelems, i)
                        arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
-                       arch_sync_dma_flush();
-               }
+               arch_sync_dma_flush();
        }
 }
 
@@ -1144,16 +1143,16 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
        struct scatterlist *sg;
        int i;
 
-       if (sg_dma_is_swiotlb(sgl))
+       if (sg_dma_is_swiotlb(sgl)) {
                for_each_sg(sgl, sg, nelems, i)
                        iommu_dma_sync_single_for_device(dev,
                                                         sg_dma_address(sg),
                                                         sg->length, dir);
-       else if (!dev_is_dma_coherent(dev))
-               for_each_sg(sgl, sg, nelems, i) {
+       } else if (!dev_is_dma_coherent(dev)) {
+               for_each_sg(sgl, sg, nelems, i)
                        arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
-                       arch_sync_dma_flush();
-               }
+               arch_sync_dma_flush();
+       }
 }
 
 static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
-- 
2.43.0