Re: [PATCH v1 02/16] iommu/dma: handle MMIO path in dma_iova_link
- To: Leon Romanovsky <leon@xxxxxxxxxx>
- From: Jason Gunthorpe <jgg@xxxxxxxxxx>
- Date: Wed, 6 Aug 2025 15:10:41 -0300
- Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>, Leon Romanovsky <leonro@xxxxxxxxxx>, Abdiel Janulgue <abdiel.janulgue@xxxxxxxxx>, Alexander Potapenko <glider@xxxxxxxxxx>, Alex Gaynor <alex.gaynor@xxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxx>, Danilo Krummrich <dakr@xxxxxxxxxx>, iommu@xxxxxxxxxxxxxxx, Jason Wang <jasowang@xxxxxxxxxx>, Jens Axboe <axboe@xxxxxxxxx>, Joerg Roedel <joro@xxxxxxxxxx>, Jonathan Corbet <corbet@xxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, kasan-dev@xxxxxxxxxxxxxxxx, Keith Busch <kbusch@xxxxxxxxxx>, linux-block@xxxxxxxxxxxxxxx, linux-doc@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, linux-nvme@xxxxxxxxxxxxxxxxxxx, linuxppc-dev@xxxxxxxxxxxxxxxx, linux-trace-kernel@xxxxxxxxxxxxxxx, Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>, Masami Hiramatsu <mhiramat@xxxxxxxxxx>, Michael Ellerman <mpe@xxxxxxxxxxxxxx>, "Michael S. Tsirkin" <mst@xxxxxxxxxx>, Miguel Ojeda <ojeda@xxxxxxxxxx>, Robin Murphy <robin.murphy@xxxxxxx>, rust-for-linux@xxxxxxxxxxxxxxx, Sagi Grimberg <sagi@xxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Steven Rostedt <rostedt@xxxxxxxxxxx>, virtualization@xxxxxxxxxxxxxxx, Will Deacon <will@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Wed, 06 Aug 2025 18:11:01 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On Mon, Aug 04, 2025 at 03:42:36PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@xxxxxxxxxx>
>
> Make sure that CPU is not synced if MMIO path is taken.
Let's elaborate:

Implement DMA_ATTR_MMIO for dma_iova_link().

This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.

Also correct the incorrect caching attribute for the IOMMU. MMIO
memory should not be cacheable inside the IOMMU mapping or it can
create system problems. Set IOMMU_MMIO for DMA_ATTR_MMIO.
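For illustration (not part of this patch), a dma_iova_link() user
mapping MMIO would look roughly like this; bar_phys and bar_len are
made-up names for a peer device's BAR region:

	struct dma_iova_state state = {};
	int ret;

	/* Reserve an IOVA range covering the MMIO region */
	if (!dma_iova_try_alloc(dev, &state, bar_phys, bar_len))
		return -ENOMEM;

	/*
	 * DMA_ATTR_MMIO: no arch_sync_dma_for_device() on this physical
	 * range, and the IOPTE gets device (non-cacheable) attributes
	 * via IOMMU_MMIO.
	 */
	ret = dma_iova_link(dev, &state, bar_phys, 0, bar_len,
			    DMA_TO_DEVICE, DMA_ATTR_MMIO);
	if (ret) {
		dma_iova_free(dev, &state);
		return ret;
	}
	ret = dma_iova_sync(dev, &state, 0, bar_len);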
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index ea2ef53bd4fef..399838c17b705 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1837,13 +1837,20 @@ static int __dma_iova_link(struct device *dev, dma_addr_t addr,
>  		phys_addr_t phys, size_t size, enum dma_data_direction dir,
>  		unsigned long attrs)
>  {
> -	bool coherent = dev_is_dma_coherent(dev);
> +	int prot;
>
> -	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> -		arch_sync_dma_for_device(phys, size, dir);
> +	if (attrs & DMA_ATTR_MMIO)
> +		prot = dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO;
Yeah, exactly, we need the IOPTE on ARM to have the right cacheability
or some systems can misbehave.
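To make that concrete, this is roughly what io-pgtable-arm.c does with
the prot bits for a stage-1 IOPTE (quoting from memory, names may be
slightly off):

	/*
	 * IOMMU_MMIO selects the Device memory MAIR index rather than
	 * the cacheable one.
	 */
	if (prot & IOMMU_MMIO)
		pte |= ARM_LPAE_PTE_ATTRINDX(ARM_LPAE_MAIR_ATTR_IDX_DEV);
	else if (prot & IOMMU_CACHE)
		pte |= ARM_LPAE_PTE_ATTRINDX(ARM_LPAE_MAIR_ATTR_IDX_CACHE);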
> +	else {
> +		bool coherent = dev_is_dma_coherent(dev);
> +
> +		if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> +			arch_sync_dma_for_device(phys, size, dir);
> +		prot = dma_info_to_prot(dir, coherent, attrs);
> +	}
>
>  	return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size,
> -			dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC);
> +			prot, GFP_ATOMIC);
>  }
Hmm, I missed this in the prior series. Ideally the GFP_ATOMIC should
be passed in as a gfp_t argument so that callers which can sleep are
able to use GFP_KERNEL.
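Something like this (sketch only, same body as the patch with the gfp
threaded through):

	static int __dma_iova_link(struct device *dev, dma_addr_t addr,
			phys_addr_t phys, size_t size,
			enum dma_data_direction dir, unsigned long attrs,
			gfp_t gfp)
	{
		int prot;

		if (attrs & DMA_ATTR_MMIO) {
			prot = dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO;
		} else {
			bool coherent = dev_is_dma_coherent(dev);

			if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
				arch_sync_dma_for_device(phys, size, dir);
			prot = dma_info_to_prot(dir, coherent, attrs);
		}

		/* Callers that can sleep would pass GFP_KERNEL here */
		return iommu_map_nosync(iommu_get_dma_domain(dev), addr,
				phys, size, prot, gfp);
	}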
Reviewed-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Jason