
Re: [RFC PATCH v3 2/6] swiotlb: Add restricted DMA pool



On 2021-01-13 17:43, Florian Fainelli wrote:
On 1/13/21 7:27 AM, Robin Murphy wrote:
On 2021-01-13 13:59, Nicolas Saenz Julienne wrote:
Hi All,

On Tue, 2021-01-12 at 16:03 -0800, Florian Fainelli wrote:
On 1/5/21 7:41 PM, Claire Chang wrote:
Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes in the device tree.

Signed-off-by: Claire Chang <tientzu@xxxxxxxxxxxx>
---
   include/linux/device.h  |   4 ++
   include/linux/swiotlb.h |   7 +-
   kernel/dma/Kconfig      |   1 +
   kernel/dma/swiotlb.c    | 144 ++++++++++++++++++++++++++++++++++------
   4 files changed, 131 insertions(+), 25 deletions(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 89bb8b84173e..ca6f71ec8871 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -413,6 +413,7 @@ struct dev_links_info {
    * @dma_pools:    Dma pools (if dma'ble device).
    * @dma_mem:    Internal for coherent mem override.
    * @cma_area:    Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
    * @archdata:    For arch-specific additions.
    * @of_node:    Associated device tree node.
    * @fwnode:    Associated device node supplied by platform firmware.
@@ -515,6 +516,9 @@ struct device {
   #ifdef CONFIG_DMA_CMA
       struct cma *cma_area;        /* contiguous memory area for dma
                          allocations */
+#endif
+#ifdef CONFIG_SWIOTLB
+    struct io_tlb_mem    *dma_io_tlb_mem;
   #endif
       /* arch specific additions */
       struct dev_archdata    archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd8eb57cbb8f..a1bbd7788885 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -76,12 +76,13 @@ extern enum swiotlb_force swiotlb_force;
    *
 * @start:    The start address of the swiotlb memory pool. Used to do a quick
 *        range check to see if the memory was in fact allocated by this
- *        API.
+ *        API. For restricted DMA pool, this is device tree adjustable.

Maybe write it as "this is firmware adjustable", so that when/if ACPI
needs something like this, the description does not need updating.

TBH I really don't think this needs calling out at all. Even in the
regular case, the details of exactly how and where the pool is allocated
are beyond the scope of this code - architectures already have several
ways to control that and make their own decisions.


[snip]

+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+                    struct device *dev)
+{
+    struct io_tlb_mem *mem = rmem->priv;
+    int ret;
+
+    if (dev->dma_io_tlb_mem)
+        return -EBUSY;
+
+    if (!mem) {
+        mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+        if (!mem)
+            return -ENOMEM;
+
+        if (!memremap(rmem->base, rmem->size, MEMREMAP_WB)) {

MEMREMAP_WB sounds appropriate as a default.

As per the binding, 'no-map' has to be disabled here. So AFAIU, this
memory will be part of the linear mapping. Is this really needed then?

More than that, I'd assume that we *have* to use the linear/direct map
address rather than anything that has any possibility of being a vmalloc
remap, otherwise we can no longer safely rely on
phys_to_dma/dma_to_phys, no?

I believe you are right, which means that if we want to make use of the
restricted DMA pool on a 32-bit architecture (and we do; at least, I do),
we should probably add some error checking or a warning to ensure the
restricted DMA pool falls within the linear map.

Oh, good point - I'm so used to 64-bit that I instinctively just blanked out the !PageHighMem() condition in try_ram_remap(). So maybe the original intent here *was* to effectively just implement that check, but if so it could still do with being a lot more explicit.
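
To make the phys_to_dma()/dma_to_phys() point concrete, here's a minimal
sketch (not patch code; it assumes this series' io_tlb_mem with mem->start
holding the pool's physical base) of the address math that only stays
coherent when the CPU-side mapping is the linear map:

#include <linux/dma-direct.h>	/* phys_to_dma() */
#include <linux/swiotlb.h>	/* IO_TLB_SHIFT */

/*
 * Sketch only: swiotlb-style code derives bounce-slot addresses from a
 * physical base, so the CPU virtual address for the same slot must come
 * from the linear map (phys_to_virt()). A pointer into a vmalloc-area
 * mapping returned by memremap() has no valid virt_to_phys()/
 * phys_to_dma() round-trip.
 */
static dma_addr_t restricted_slot_dma_addr(struct device *dev,
					   struct io_tlb_mem *mem,
					   unsigned int index)
{
	phys_addr_t paddr = mem->start + ((phys_addr_t)index << IO_TLB_SHIFT);

	return phys_to_dma(dev, paddr);
}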
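
And a sketch of the explicit check suggested above (hypothetical helper
name; assumes the reserved region is page-aligned System RAM), which
rmem_swiotlb_device_init() could run before touching the memory:

#include <linux/mm.h>		/* pfn_valid(), pfn_to_page() */
#include <linux/highmem.h>	/* PageHighMem() */
#include <linux/of_reserved_mem.h>	/* struct reserved_mem */
#include <linux/pfn.h>		/* PHYS_PFN() */

/*
 * Hypothetical helper: true only if every page of the reserved region
 * is mapped lowmem, i.e. covered by the kernel linear map, so that
 * phys_to_virt()/phys_to_dma() remain valid for it.
 */
static bool rmem_in_linear_map(struct reserved_mem *rmem)
{
	unsigned long pfn = PHYS_PFN(rmem->base);
	unsigned long last = PHYS_PFN(rmem->base + rmem->size - 1);

	for (; pfn <= last; pfn++)
		if (!pfn_valid(pfn) || PageHighMem(pfn_to_page(pfn)))
			return false;
	return true;
}

rmem_swiotlb_device_init() could then fail with -EINVAL (or at least
WARN) on 32-bit configurations where the pool lands in highmem, instead
of relying on memremap()'s fallback behaviour.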

Cheers,
Robin.
