Re: [PATCH V1 3/6] xen/virtio: Add option to restrict memory access under Xen
On 23.04.22 19:40, Christoph Hellwig wrote:

Hello Christoph

> Please split this into one patch that creates grant-dma-ops, and
> another that sets up the virtio restricted access helpers.

Sounds reasonable, will do:

1. grant-dma-ops.c with config XEN_GRANT_DMA_OPS
2. arch_has_restricted_virtio_memory_access() with config XEN_VIRTIO
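For item 2, the helper might end up looking roughly like the sketch below
(an assumption on my side that the check boils down to "running in a Xen
domain with CONFIG_XEN_VIRTIO enabled"; the exact form and arch wiring are
not settled here):

#include <xen/xen.h>	/* xen_domain() */

/* Sketch only, not the actual patch: report restricted virtio memory
 * access whenever we run in a Xen domain and CONFIG_XEN_VIRTIO is
 * enabled. */
int arch_has_restricted_virtio_memory_access(void)
{
	return IS_ENABLED(CONFIG_XEN_VIRTIO) && xen_domain();
}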
I have a limited knowledge of x86 and Xen on x86. Would the Xen specific
bits fit into Confidential Computing Platform checks? I will let
Juergen/Boris comment on this.

> > +config XEN_VIRTIO
> > +	bool "Xen virtio support"
> > +	default n
>
> n is the default default, so no need to specify it.

ok, will drop

> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/******************************************************************************
>
> The all * line is not the usual kernel style, I'd suggest to drop it.

ok, will drop
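Coming back to the Kconfig bits, the cleaned-up entries could then look
roughly like this (the "depends on XEN" line and the hidden
XEN_GRANT_DMA_OPS symbol are assumptions on my side, nothing settled in
this thread):

config XEN_GRANT_DMA_OPS
	bool

config XEN_VIRTIO
	bool "Xen virtio support"
	depends on XEN
	select XEN_GRANT_DMA_OPS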
I got it, will implement.

> > +	spin_lock(&xen_grant_dma_lock);
> > +	list_add(&data->list, &xen_grant_dma_devices);
> > +	spin_unlock(&xen_grant_dma_lock);
>
> Hmm, having to do this device lookup for every DMA operation is going
> to suck. It might make sense to add a private field (e.g. as a union
> with the iommu field) in struct device instead.

I was thinking about it, but decided not to alter the common struct
device just to add a Xen specific field, and haven't managed to think of
a better idea than just using that brute-force lookup ...

> But if not you probably want to switch to a more efficient data
> structure like the xarray at least.

... I think this is a good point, thank you. I have no idea how much
faster it is going to be, but the resulting code looks simple (if, of
course, I correctly understood the usage of xarray):
diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index a512c0a..7ecc0b0 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -11,6 +11,7 @@
 #include <linux/dma-map-ops.h>
 #include <linux/of.h>
 #include <linux/pfn.h>
+#include <linux/xarray.h>
 
 #include <xen/xen.h>
 #include <xen/grant_table.h>
@@ -19,12 +20,9 @@ struct xen_grant_dma_data {
 	domid_t dev_domid;
 	/* Is device behaving sane? */
 	bool broken;
-	struct device *dev;
-	struct list_head list;
 };
 
-static LIST_HEAD(xen_grant_dma_devices);
-static DEFINE_SPINLOCK(xen_grant_dma_lock);
+static DEFINE_XARRAY(xen_grant_dma_devices);
 
 #define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)
@@ -40,21 +38,13 @@ static inline grant_ref_t dma_to_grant(dma_addr_t dma)
 
 static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
 {
-	struct xen_grant_dma_data *data = NULL;
-	bool found = false;
-
-	spin_lock(&xen_grant_dma_lock);
-
-	list_for_each_entry(data, &xen_grant_dma_devices, list) {
-		if (data->dev == dev) {
-			found = true;
-			break;
-		}
-	}
+	struct xen_grant_dma_data *data;
 
-	spin_unlock(&xen_grant_dma_lock);
+	xa_lock(&xen_grant_dma_devices);
+	data = xa_load(&xen_grant_dma_devices, (unsigned long)dev);
+	xa_unlock(&xen_grant_dma_devices);
 
-	return found ? data : NULL;
+	return data;
 }
 
 /*
@@ -310,11 +300,12 @@ void xen_grant_setup_dma_ops(struct device *dev)
 		goto err;
 
 	data->dev_domid = dev_domid;
-	data->dev = dev;
 
-	spin_lock(&xen_grant_dma_lock);
-	list_add(&data->list, &xen_grant_dma_devices);
-	spin_unlock(&xen_grant_dma_lock);
+	if (xa_err(xa_store(&xen_grant_dma_devices, (unsigned long)dev, data,
+			    GFP_KERNEL))) {
+		dev_err(dev, "Cannot store Xen grant DMA data\n");
+		goto err;
+	}
 
 	dev->dma_ops = &xen_grant_dma_ops;
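One thing the diff above does not show is teardown: if a device using
these ops can go away, its entry would presumably have to be dropped as
well, along these lines (illustrative sketch with a made-up helper name,
not part of this patch):

#include <linux/device.h>
#include <linux/xarray.h>

/* Illustrative only: forget the per-device grant-DMA data so that the
 * xarray does not keep a stale pointer once the device is gone. */
static void xen_grant_dma_forget_device(struct device *dev)
{
	xa_erase(&xen_grant_dma_devices, (unsigned long)dev);
}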
+EXPORT_SYMBOL_GPL(xen_grant_setup_dma_ops);I don't think this has any modular users, or did I miss something? No, you didn't. Will drop here and in the next patch for xen_is_grant_dma_device() as well. -- Regards, Oleksandr Tyshchenko