Re: [PATCH v12.2 01/15] vpci: use per-domain PCI lock to protect vpci structure
On 15.01.2024 20:43, Stewart Hildebrand wrote:
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -468,7 +468,7 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
>      struct msixtbl_entry *entry, *new_entry;
>      int r = -EINVAL;
> 
> -    ASSERT(pcidevs_locked());
> +    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));
>      ASSERT(rw_is_write_locked(&d->event_lock));
> 
>      if ( !msixtbl_initialised(d) )
> @@ -538,7 +538,7 @@ void msixtbl_pt_unregister(struct domain *d, struct pirq *pirq)
>      struct pci_dev *pdev;
>      struct msixtbl_entry *entry;
> 
> -    ASSERT(pcidevs_locked());
> +    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));
>      ASSERT(rw_is_write_locked(&d->event_lock));

I was hoping to just ack this patch, but the two changes above look
questionable to me: How can it be that holding _either_ lock is okay?
It's not obvious in this context that consumers have to hold both locks
now. In fact consumers look to be the callers of msixtbl_find_entry(),
yet the list is RCU-protected. Whereas races against themselves or
against one another are avoided by holding d->event_lock. My only guess
then for the original need of holding pcidevs_lock is the use of
msi_desc->dev, with the desire for the device to not go away. Yet the
description doesn't talk about interactions of the per-domain PCI lock
with that one at all; it all circles around the domain's vPCI lock.
Feels like I'm missing something that's obvious to everyone else. Or
maybe this part of the patch is actually unrelated, and should be split
off (with its own [proper] justification)? Or wouldn't it then be
better to also change the other paths leading here to acquire the
per-domain PCI lock?

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -413,7 +413,7 @@ static int cf_check vmx_pi_update_irte(const struct vcpu *v,
> 
>      spin_unlock_irq(&desc->lock);
> 
> -    ASSERT(pcidevs_locked());
> +    ASSERT(pcidevs_locked() ||
>             rw_is_locked(&msi_desc->dev->domain->pci_lock));
> 
>      return iommu_update_ire_from_msi(msi_desc, &msi_desc->msg);

This then falls in the same category. And apparently there are more.

Jan
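
For reference, a minimal sketch of the invariant an either-lock ASSERT of
this shape would have to rely on: every path that mutates the protected
state takes both locks, so a path holding just one of them (either one)
cannot race with an update. The sketch uses POSIX rwlocks and made-up
names (lock_a, lock_b, protected_state) standing in for pcidevs_lock and
d->pci_lock; it is not Xen code, and whether the vPCI series actually
establishes this invariant is exactly the question raised above.

/*
 * Illustrative only: "writers take both locks, readers take either".
 * lock_a / lock_b / protected_state are hypothetical placeholders.
 */
#include <pthread.h>

static pthread_rwlock_t lock_a = PTHREAD_RWLOCK_INITIALIZER;
static pthread_rwlock_t lock_b = PTHREAD_RWLOCK_INITIALIZER;
static int protected_state;

/* Reader: holding either one of the two locks is enough ... */
int read_state_under_a_only(void)
{
    int v;

    pthread_rwlock_rdlock(&lock_a);
    v = protected_state;      /* stable: any writer also holds lock_a */
    pthread_rwlock_unlock(&lock_a);
    return v;
}

/* ... but only because every writer acquires both locks. */
void write_state(int v)
{
    pthread_rwlock_wrlock(&lock_a);
    pthread_rwlock_wrlock(&lock_b);
    protected_state = v;
    pthread_rwlock_unlock(&lock_b);
    pthread_rwlock_unlock(&lock_a);
}

With that invariant in place, asserting "holds lock_a || holds lock_b" at
a read-side call site is meaningful; without it, the relaxed assertion by
itself guarantees nothing.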