Re: [XEN][PATCH v9 14/19] common/device_tree: Add rwlock for dt_host
Hi,

On Thu, Aug 24, 2023 at 11:22:00PM -0700, Vikram Garhwal wrote:
> Hi Julien,
>
> On Wed, Aug 23, 2023 at 11:06:59PM +0100, Julien Grall wrote:
> > Hi Vikram,
> >
> > On 19/08/2023 01:28, Vikram Garhwal wrote:
> > > Dynamic programming ops will modify the dt_host and there might be other
> > > function which are browsing the dt_host at the same time. To avoid the race
> >
> > Typo: I think you want to write 'functions'
> >
> > > conditions, adding rwlock for browsing the dt_host during runtime. dt_host
> > > writer will be added in the follow-up patch titled "xen/arm: Implement device
> > > tree node addition functionalities."
> >
> > I would prefer if we avoid mentioning the name of the follow-up commit. This
> > will reduce the risk that the name of the commit is incorrect if we decide
> > to commit this patch before the rest of the series is ready.
> >
> > Also, the commit message seems to be indented. Was it intended?
> >
> > > Reason behind adding rwlock instead of spinlock:
> > >     For now, dynamic programming is the sole modifier of dt_host in Xen during
> > >     run time. All other access functions like iommu_release_dt_device() are
> > >     just reading the dt_host during run-time. So, there is a need to protect
> > >     others from browsing the dt_host while dynamic programming is modifying
> > >     it. rwlock is better suitable for this task as spinlock won't be able to
> > >     differentiate between read and write access.
> >
> > The indentation looks odd here as well.
>
> Changed above comments in v10.
>
> > > Signed-off-by: Vikram Garhwal <vikram.garhwal@xxxxxxx>
> > >
> > > ---
> > > Changes from v7:
> > >     Keep one lock for dt_host instead of lock for each node under dt_host.
> > > ---
> > > ---
> > >  xen/common/device_tree.c              |  5 +++++
> > >  xen/drivers/passthrough/device_tree.c | 15 +++++++++++++++
> > >  xen/include/xen/device_tree.h         |  6 ++++++
> > >  3 files changed, 26 insertions(+)
> >
> > I am not sure where to put the comment. I noticed that you didn't touch
> > iommu_remove_dt_device() and iommu_add_dt_device(). Does this mean the
> > caller is expected to hold the lock? If so, then this should be documented
> > and an ASSERT() should be added.
>
> Added ASSERT in iommu_(add,remove,assign and deassign)_dt_device().
> iommu_add_ and iommu_assign_ are called at boot time. Also, the only other
> callers are handle_device() via overlays and iommu_do_dt_domctl(), which
> will hold the dt_host_lock. Will look into it more, but for now sending v10
> with ASSERT in these two functions.
>
> > > diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
> > > index 0f10037745..6b934fe036 100644
> > > --- a/xen/common/device_tree.c
> > > +++ b/xen/common/device_tree.c
> > > @@ -31,6 +31,7 @@ dt_irq_xlate_func dt_irq_xlate;
> > >  struct dt_device_node *dt_host;
> > >  /* Interrupt controller node*/
> > >  const struct dt_device_node *dt_interrupt_controller;
> > > +rwlock_t dt_host_lock;
> > >
> > >  /**
> > >   * struct dt_alias_prop - Alias property in 'aliases' node
> > > @@ -2137,7 +2138,11 @@ int unflatten_device_tree(const void *fdt, struct dt_device_node **mynodes)
> > >      dt_dprintk(" <- unflatten_device_tree()\n");
> > >
> > > +    /* Init r/w lock for host device tree. */
> > > +    rwlock_init(&dt_host_lock);
> >
> > Calling rwlock_init() from unflatten_device_tree() seems to be incorrect as
> > it would lead to re-initialising the lock every time we create a new DT
> > overlay.
> >
> > Instead you want to replace the definition of dt_host_lock with:
> >
> > DEFINE_RWLOCK(dt_host_lock)
>
> Changed this. DEFINE_RWLOCK is added to device-tree.c and this is removed.
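For readers following the archive: the suggestion above amounts to defining the
lock statically initialised rather than initialising it at unflatten time. A
minimal sketch of the shape of the change in xen/common/device_tree.c (not the
literal v10 hunk):

    #include <xen/rwlock.h>

    struct dt_device_node *dt_host;
    /* Interrupt controller node */
    const struct dt_device_node *dt_interrupt_controller;

    /*
     * Statically initialised, so unflatten_device_tree() no longer calls
     * rwlock_init() and applying a DT overlay cannot re-initialise a lock
     * that readers may already hold.
     */
    DEFINE_RWLOCK(dt_host_lock);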
> > > +
> > >      return 0;
> > > +
> >
> > Spurious change?
> >
> > >  }
> > >
> > >  static void dt_alias_add(struct dt_alias_prop *ap,
> > > diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> > > index 4cb32dc0b3..31815d2b60 100644
> > > --- a/xen/drivers/passthrough/device_tree.c
> > > +++ b/xen/drivers/passthrough/device_tree.c
> > > @@ -114,6 +114,8 @@ int iommu_release_dt_devices(struct domain *d)
> > >      if ( !is_iommu_enabled(d) )
> > >          return 0;
> > >
> > > +    read_lock(&dt_host_lock);
> > > +
> > >      list_for_each_entry_safe(dev, _dev, &hd->dt_devices, domain_list)
> > >      {
> > >          rc = iommu_deassign_dt_device(d, dev);
> >
> > So iommu_deassign_dt_device() is now called with the read lock. I am
> > assuming the intention is that all the callers will need to first hold the
> > lock. If so, then I think this would require an ASSERT() in
> > iommu_deassign_dt_device() and a comment on top of the function because it
> > is exported.
> >
> > I am guessing that iommu_assign_dt_device() is in the same situation.
> >
> > > @@ -121,10 +123,14 @@ int iommu_release_dt_devices(struct domain *d)
> > >          {
> > >              dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
> > >                      dt_node_full_name(dev), d->domain_id);
> > > +
> > > +            read_unlock(&dt_host_lock);
> >
> > Coding style: Usually we add the newline before the return. So I would
> > switch around the two lines.
> >
> > >              return rc;
> > >          }
> > >      }
> > >
> > > +    read_unlock(&dt_host_lock);
> > > +
> > >      return 0;
> > >  }
> > >
> > > @@ -251,6 +257,8 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
> > >      int ret;
> > >      struct dt_device_node *dev;
> > >
> > > +    read_lock(&dt_host_lock);
> > > +
> > >      switch ( domctl->cmd )
> > >      {
> > >      case XEN_DOMCTL_assign_device:
> > > @@ -304,7 +312,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
> > >          spin_unlock(&dtdevs_lock);
> > >
> > >          if ( d == dom_io )
> > > +        {
> > > +            read_unlock(&dt_host_lock);
> > >              return -EINVAL;
> >
> > NIT: Rather than adding the unlock here, you could use:
> >
> > rc = -EINVAL;
> > break;
> >
> > > +        }
> > >
> > >          ret = iommu_add_dt_device(dev);
> > >          if ( ret < 0 )
> > > @@ -342,7 +353,10 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
> > >              break;
> > >
> > >          if ( d == dom_io )
> > > +        {
> > > +            read_unlock(&dt_host_lock);
> > >              return -EINVAL;
> > > +        }
> >
> > NIT: Same here.
> >
> > >          ret = iommu_deassign_dt_device(d, dev);
> > > @@ -357,5 +371,6 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
> > >          break;
> > >      }
> > >
> > > +    read_unlock(&dt_host_lock);
> >
> > Coding style: Please add a newline.
>
> Changed all above coding styles.
>
> > >      return ret;
> > >  }
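To make the two NITs concrete: by recording the error and breaking, every path
funnels through the single read_unlock() at the bottom of the function. A rough
sketch of the pattern on a cut-down, made-up handler (dt_domctl_sketch() and
its op numbering are invented for illustration; `ret` is kept since that is the
variable the quoted function actually uses, rather than the `rc` in the
suggestion):

    static int dt_domctl_sketch(struct domain *d, struct dt_device_node *dev,
                                unsigned int op)
    {
        int ret = -ENOSYS;

        read_lock(&dt_host_lock);

        switch ( op )
        {
        case 0: /* assign */
            if ( d == dom_io )
            {
                /* No early return: record the error and break to the common exit. */
                ret = -EINVAL;
                break;
            }
            ret = iommu_add_dt_device(dev);
            break;

        case 1: /* deassign */
            ret = iommu_deassign_dt_device(d, dev);
            break;
        }

        /* Single exit, so the read lock taken at entry is dropped exactly once. */
        read_unlock(&dt_host_lock);

        return ret;
    }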
> > > diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> > > index e507658b23..8191f30197 100644
> > > --- a/xen/include/xen/device_tree.h
> > > +++ b/xen/include/xen/device_tree.h
> > > @@ -18,6 +18,7 @@
> > >  #include <xen/string.h>
> > >  #include <xen/types.h>
> > >  #include <xen/list.h>
> > > +#include <xen/rwlock.h>
> > >
> > >  #define DEVICE_TREE_MAX_DEPTH 16
> > >
> > > @@ -216,6 +217,11 @@ extern struct dt_device_node *dt_host;
> > >   */
> > >  extern const struct dt_device_node *dt_interrupt_controller;
> > >
> > > +/*
> > > + * Lock that protects r/w updates to unflattened device tree i.e. dt_host.
> > > + */
> >
> > The wording suggests that any update to any node would require holding the
> > write lock. However... it looks like you are only holding the read lock when
> > setting is_protected in the SMMU remove callback. Is this intended?
> >
> > Or maybe you expect is_protected to be protected by dtdevs_lock? If so, then
> > I think it would be good to spell it out. Possibly on top of is_protected.
>
> Yes, dtdevs_lock will be held to avoid concurrent calls to SMMU remove.
>
> > Lastly, there are plenty of places in Xen where the lock is not taken. They
> > mostly seem to be at boot, so I would mention that for boot-only code the
> > lock may not be taken.
>
> Updated.
>
> > Lastly, this is a single line comment, so the coding style should be:
> >
> > /* ... */
> >
> > > +extern rwlock_t dt_host_lock;
> > > +
> > >  /**
> > >   * Find the interrupt controller
> > >   * For the moment we handle only one interrupt controller: the first
> >
> > Cheers,
> >
> > --
> > Julien Grall
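Stepping back, the locking discipline the thread converges on can be summarised
as below. This is only a sketch: the *_example() functions are invented for
illustration, boot-only code is exempt as noted above, and rw_is_locked() is
the xen/rwlock.h predicate one would expect the ASSERT() to use (the exact v10
form may differ).

    /* Runtime browsers of dt_host (e.g. the IOMMU deassign path) read-lock. */
    static void browse_dt_host_example(void)
    {
        read_lock(&dt_host_lock);
        /* ... walk dt_host nodes ... */
        read_unlock(&dt_host_lock);
    }

    /* The dynamic programming (overlay) path is the sole runtime writer. */
    static void modify_dt_host_example(void)
    {
        write_lock(&dt_host_lock);
        /* ... add or remove nodes under dt_host ... */
        write_unlock(&dt_host_lock);
    }

    /* Helpers that rely on the caller holding the lock assert that it is held. */
    static void helper_expecting_lock_example(void)
    {
        ASSERT(rw_is_locked(&dt_host_lock));
        /* ... touch dt_host on behalf of the caller ... */
    }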