
Re: [RFC PATCH v3 2/6] swiotlb: Add restricted DMA pool

On 2021-01-13 13:59, Nicolas Saenz Julienne wrote:
Hi All,

On Tue, 2021-01-12 at 16:03 -0800, Florian Fainelli wrote:
On 1/5/21 7:41 PM, Claire Chang wrote:
Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes in the device tree.

Signed-off-by: Claire Chang <tientzu@xxxxxxxxxxxx>
---
  include/linux/device.h  |   4 ++
  include/linux/swiotlb.h |   7 +-
  kernel/dma/Kconfig      |   1 +
  kernel/dma/swiotlb.c    | 144 ++++++++++++++++++++++++++++++++++------
  4 files changed, 131 insertions(+), 25 deletions(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 89bb8b84173e..ca6f71ec8871 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -413,6 +413,7 @@ struct dev_links_info {
   * @dma_pools:      Dma pools (if dma'ble device).
   * @dma_mem:        Internal for coherent mem override.
   * @cma_area:       Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
   * @archdata:       For arch-specific additions.
   * @of_node:        Associated device tree node.
   * @fwnode: Associated device node supplied by platform firmware.
@@ -515,6 +516,9 @@ struct device {
   struct cma *cma_area;           /* contiguous memory area for dma
                                      allocations */
+       struct io_tlb_mem       *dma_io_tlb_mem;
   /* arch specific additions */
   struct dev_archdata     archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index dd8eb57cbb8f..a1bbd7788885 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -76,12 +76,13 @@ extern enum swiotlb_force swiotlb_force;
   * @start:  The start address of the swiotlb memory pool. Used to do a quick
   *          range check to see if the memory was in fact allocated by this
- *             API.
+ *             API. For restricted DMA pool, this is device tree adjustable.

Maybe write it as this is "firmware adjustable" such that when/if ACPI
needs something like this, the description does not need updating.

TBH I really don't think this needs calling out at all. Even in the regular case, the details of exactly how and where the pool is allocated are beyond the scope of this code - architectures already have several ways to control that and make their own decisions.


+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+                                   struct device *dev)
+{
+       struct io_tlb_mem *mem = rmem->priv;
+       int ret;
+
+       if (dev->dma_io_tlb_mem)
+               return -EBUSY;
+       if (!mem) {
+               mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+               if (!mem)
+                       return -ENOMEM;
+               if (!memremap(rmem->base, rmem->size, MEMREMAP_WB)) {

MEMREMAP_WB sounds appropriate as a default.

As per the binding, 'no-map' has to be disabled here. So AFAIU, this memory will
be part of the linear mapping. Is this really needed then?

More than that, I'd assume that we *have* to use the linear/direct map address rather than anything that has any possibility of being a vmalloc remap, otherwise we can no longer safely rely on phys_to_dma/dma_to_phys, no?

That said, given that we're not actually using the returned address, I'm not entirely sure what the point of this call is anyway. If we can assume it's always going to return the linear map address via try_ram_remap() then we can equally just go ahead and use the linear map address straight away. I don't really see how we could ever hit the "is_ram == REGION_MIXED" case in memremap() that would return NULL, if we passed the memblock check earlier in __reserved_mem_alloc_size() such that this rmem node ever got to be initialised at all.
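To illustrate the simplification Robin is suggesting: if the reserved region is guaranteed to be covered by the linear mapping (i.e. the node does not carry "no-map"), the memremap() call could be dropped and the linear-map address taken directly. This is only a sketch under that assumption; the io_tlb_mem initialisation is elided, and the overall shape follows the quoted patch, not a tested implementation:

```c
/* Sketch only: assumes rmem->base lies in the kernel linear mapping
 * (no "no-map" in the reserved-memory node). In that case
 * memremap(..., MEMREMAP_WB) would return the linear-map address via
 * try_ram_remap() anyway, so we can use phys_to_virt() directly and
 * phys_to_dma()/dma_to_phys() remain valid for the pool.
 */
static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
				    struct device *dev)
{
	struct io_tlb_mem *mem = rmem->priv;

	if (dev->dma_io_tlb_mem)
		return -EBUSY;

	if (!mem) {
		void *vaddr = phys_to_virt(rmem->base);

		mem = kzalloc(sizeof(*mem), GFP_KERNEL);
		if (!mem)
			return -ENOMEM;
		/* ... initialise io_tlb_mem bookkeeping from vaddr ... */
		rmem->priv = mem;
	}

	dev->dma_io_tlb_mem = mem;
	return 0;
}
```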


Documentation/devicetree/bindings/reserved-memory/ramoops.txt does
define an "unbuffered" property which in principle could be applied to the
generic reserved-memory binding as well, and which we may have to
honor here if we were to make this more generic. Oh well, this does
not need to be addressed right now I guess.
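For context, a reserved-memory node for such a restricted DMA pool might look something like the fragment below. The compatible string and layout are assumptions based on the binding proposed elsewhere in this series; the address, size, and label are purely illustrative:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Hypothetical 16 MiB restricted DMA pool. Note the absence
	 * of "no-map", so the region remains in the linear mapping.
	 */
	restricted_dma: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x1000000>;
	};
};
```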


