
[PATCH v2 1/2] xen/swiotlb: add alignment check for dma buffers



When checking whether a memory buffer is contiguous in machine memory,
its alignment needs to be checked, too. Failing to do so might result
in DMA memory not being aligned according to its requested size,
leading to error messages like:

  4xxx 0000:2b:00.0: enabling device (0140 -> 0142)
  4xxx 0000:2b:00.0: Ring address not aligned
  4xxx 0000:2b:00.0: Failed to initialise service qat_crypto
  4xxx 0000:2b:00.0: Resetting device qat_dev0
  4xxx: probe of 0000:2b:00.0 failed with error -14
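
To illustrate the new condition outside of the kernel, below is a minimal
user-space sketch of the alignment check. It is not kernel code: PAGE_SHIFT,
the simplified get_order() stand-in, the IS_ALIGNED() macro and the sample
addresses are assumptions chosen only for illustration.

/*
 * User-space sketch of the alignment check added by this patch.
 * PAGE_SHIFT, get_order(), IS_ALIGNED() and the sample addresses are
 * simplified stand-ins, not the kernel definitions.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                            /* assumed 4K pages */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Stand-in for the kernel's get_order(): pages needed, as a power of two. */
static int get_order(size_t size)
{
        int order = 0;

        size = (size - 1) >> PAGE_SHIFT;
        while (size) {
                order++;
                size >>= 1;
        }
        return order;
}

int main(void)
{
        size_t size = 64 * 1024;       /* hypothetical ring buffer size */
        uint64_t phys = 0x40000000;    /* guest physical: 64K aligned */
        uint64_t mach = 0x40001000;    /* machine address: only 4K aligned */
        uint64_t algn = (uint64_t)1 << (get_order(size) + PAGE_SHIFT);

        /*
         * Same condition as in the patch: the guest physical address is
         * aligned to the buffer size, but the backing machine address is
         * not, so the buffer has to be bounced.
         */
        if (IS_ALIGNED(phys, algn) && !IS_ALIGNED(mach, algn))
                printf("bounce: machine address not aligned to %llu bytes\n",
                       (unsigned long long)algn);

        return 0;
}

With these sample values the check reports a misaligned machine address,
which corresponds to the situation producing the "Ring address not aligned"
errors quoted above.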

Fixes: 9435cce87950 ("xen/swiotlb: Add support for 64KB page granularity")
Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
---
V2:
- use 1ULL for creating align mask in order to cover ARM32 LPAE
- fix case of XEN_PAGE_SIZE != PAGE_SIZE (Jan Beulich)
---
 drivers/xen/swiotlb-xen.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 35155258a7e2..ddf5b1df632e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -78,9 +78,15 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 {
        unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
        unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
+       phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
 
        next_bfn = pfn_to_bfn(xen_pfn);
 
+       /* If buffer is physically aligned, ensure DMA alignment. */
+       if (IS_ALIGNED(p, algn) &&
+           !IS_ALIGNED(next_bfn << XEN_PAGE_SHIFT, algn))
+               return 1;
+
        for (i = 1; i < nr_pages; i++)
                if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
                        return 1;
-- 
2.43.0