
Re: [PATCH 1/2] xen: swiotlb: Use swiotlb bouncing if kmalloc allocation demands it



On Fri, May 02, 2025 at 11:40:55AM +0000, John Ernberg wrote:
> Xen swiotlb support was missed when the patch set starting with
> 4ab5f8ec7d71 ("mm/slab: decouple ARCH_KMALLOC_MINALIGN from
> ARCH_DMA_MINALIGN") was merged.
> 
> When running Xen on iMX8QXP, a SoC without an IOMMU, the effect was that
> USB transfers ended up corrupted when there was more than one URB in
> flight at the same time.
> 
> Add a call to dma_kmalloc_needs_bounce() to make sure that allocations
> too small (or insufficiently aligned) to be DMA-safe get bounced via
> swiotlb.
> 
> Closes: https://lore.kernel.org/linux-usb/ab2776f0-b838-4cf6-a12a-c208eb6aad59@xxxxxxxx/
> Fixes: 4ab5f8ec7d71 ("mm/slab: decouple ARCH_KMALLOC_MINALIGN from ARCH_DMA_MINALIGN")
> Cc: stable@xxxxxxxxxx # v6.5+
> Signed-off-by: John Ernberg <john.ernberg@xxxxxxxx>
> 
> ---
> 
> It's impossible to pick an exact Fixes: tag since this driver was missed
> in the flagged patch set. I picked one I felt gave a decent enough picture
> for someone coming across this later.

All the above patches went in at the same time in 6.5, so it probably
doesn't matter. In theory, you could add:

Fixes: 370645f41e6e ("dma-mapping: force bouncing if the kmalloc() size is not cache-line-aligned")
Cc: <stable@xxxxxxxxxxxxxxx> # 6.5.x

as that's when dma_kmalloc_needs_bounce() was added (a few commits after
the "decouple ARCH_KMALLOC_MINALIGN..." one). However, actual problems
started to appear with commit 9382bc44b5f5 ("arm64: allow kmalloc()
caches aligned to the smaller cache_line_size()") which makes
ARCH_KMALLOC_MINALIGN equal to 8 on arm64.
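
For anyone following along, the helper boils down to roughly the sketch
below (condensed from memory rather than quoted verbatim, and only
relevant when CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC is enabled): a buffer
needs bouncing when the device is not cache-coherent, the transfer may
require destructive cache maintenance (i.e. it is not DMA_TO_DEVICE) and
the kmalloc() size is not a multiple of dma_get_cache_alignment():

	/* Condensed sketch of the dma_kmalloc_needs_bounce() logic */
	static inline bool dma_kmalloc_needs_bounce(struct device *dev, size_t size,
						    enum dma_data_direction dir)
	{
		/* Coherent devices and device-only transfers are always safe */
		if (dev_is_dma_coherent(dev) || dir == DMA_TO_DEVICE)
			return false;

		/*
		 * Sub-cache-line or unaligned buffers may share cache lines
		 * with unrelated data, so they have to be bounced.
		 */
		return size < dma_get_cache_alignment() ||
		       !IS_ALIGNED(size, dma_get_cache_alignment());
	}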

> ---
>  drivers/xen/swiotlb-xen.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 1f65795cf5d7..ef56a2500ed6 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -217,6 +217,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>        * buffering it.
>        */
>       if (dma_capable(dev, dev_addr, size, true) &&
> +         !dma_kmalloc_needs_bounce(dev, size, dir) &&
>           !range_straddles_page_boundary(phys, size) &&
>               !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
>               !is_swiotlb_force_bounce(dev))
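
With that in place the condition reads roughly as follows (surrounding
context paraphrased from the driver, so the exact lines may differ a
little); if any of the checks fails, the buffer is bounced through the
Xen swiotlb pool instead of being mapped directly:

	/* Direct mapping is only used when all of these checks pass */
	if (dma_capable(dev, dev_addr, size, true) &&
	    !dma_kmalloc_needs_bounce(dev, size, dir) &&
	    !range_straddles_page_boundary(phys, size) &&
	    !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
	    !is_swiotlb_force_bounce(dev))
		goto done;	/* no bouncing needed */

	/* ... otherwise fall through to the swiotlb bounce-buffer path ... */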

Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>