Re: [PATCH v8 01/13] pci: introduce per-domain PCI rwlock
Hi Jan,

Jan Beulich <jbeulich@xxxxxxxx> writes:

> On 20.07.2023 02:32, Volodymyr Babchuk wrote:
>> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
>> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
>> @@ -476,8 +476,13 @@ static int cf_check reassign_device(
>>
>>      if ( devfn == pdev->devfn && pdev->domain != target )
>>      {
>> -        list_move(&pdev->domain_list, &target->pdev_list);
>> -        pdev->domain = target;
>> +        write_lock(&pdev->domain->pci_lock);
>> +        list_del(&pdev->domain_list);
>> +        write_unlock(&pdev->domain->pci_lock);
>
> As mentioned on an earlier version, perhaps better (cheaper) to use
> "source" here? (Same in VT-d code then.)

Sorry, I saw your comment on the previous version but missed including
this change. It will be done in the next version.

>> @@ -747,6 +749,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>>      ret = 0;
>>      if ( !pdev->domain )
>>      {
>> +        write_lock(&hardware_domain->pci_lock);
>>          pdev->domain = hardware_domain;
>>          list_add(&pdev->domain_list, &hardware_domain->pdev_list);
>>
>> @@ -760,6 +763,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>>              printk(XENLOG_ERR "Setup of vPCI failed: %d\n", ret);
>>              list_del(&pdev->domain_list);
>>              pdev->domain = NULL;
>> +            write_unlock(&hardware_domain->pci_lock);
>>              goto out;
>
> In addition to Roger's comments about locking scope: In a case like this
> one it would probably also be good to move the printk() out of the locked
> area. It can be slow, after all.
>
> Question is why you have this wide a locked area here in the first place:
> Don't you need to hold the lock just across the two list operations (but
> not in between)?

Strictly speaking, yes: we need to hold the lock only while operating on
the list, for now. The next patch will use the same lock to protect vPCI
(de)allocation, so the locked region will be extended anyway. I think
I'll narrow the locked area in this patch and extend it in the next one;
that seems most logical.

>> @@ -887,26 +895,62 @@ static int deassign_device(struct domain *d, uint16_t
>> seg, uint8_t bus,
>>
>>  int pci_release_devices(struct domain *d)
>>  {
>> -    struct pci_dev *pdev, *tmp;
>> -    u8 bus, devfn;
>> -    int ret;
>> +    int combined_ret;
>> +    LIST_HEAD(failed_pdevs);
>>
>>      pcidevs_lock();
>> -    ret = arch_pci_clean_pirqs(d);
>> -    if ( ret )
>> +    write_lock(&d->pci_lock);
>> +    combined_ret = arch_pci_clean_pirqs(d);
>> +    if ( combined_ret )
>>      {
>>          pcidevs_unlock();
>> -        return ret;
>> +        write_unlock(&d->pci_lock);
>> +        return combined_ret;
>>      }
>> -    list_for_each_entry_safe ( pdev, tmp, &d->pdev_list, domain_list )
>> +
>> +    while ( !list_empty(&d->pdev_list) )
>>      {
>> -        bus = pdev->bus;
>> -        devfn = pdev->devfn;
>> -        ret = deassign_device(d, pdev->seg, bus, devfn) ?: ret;
>> +        struct pci_dev *pdev = list_first_entry(&d->pdev_list,
>> +                                                struct pci_dev,
>> +                                                domain_list);
>> +        uint16_t seg = pdev->seg;
>> +        uint8_t bus = pdev->bus;
>> +        uint8_t devfn = pdev->devfn;
>> +        int ret;
>> +
>> +        write_unlock(&d->pci_lock);
>> +        ret = deassign_device(d, seg, bus, devfn);
>> +        write_lock(&d->pci_lock);
>> +        if ( ret )
>> +        {
>> +            bool still_present = false;
>> +            const struct pci_dev *tmp;
>> +
>> +            /*
>> +             * We need to check if deassign_device() left our pdev in
>> +             * domain's list. As we dropped the lock, we can't be sure
>> +             * that list wasn't permutated in some random way, so we
>> +             * need to traverse the whole list.
>> +             */
>> +            for_each_pdev ( d, tmp )
>> +            {
>> +                if ( tmp == pdev )
>> +                {
>> +                    still_present = true;
>> +                    break;
>> +                }
>> +            }
>> +            if ( still_present )
>> +                list_move(&pdev->domain_list, &failed_pdevs);
>
> In order to retain original ordering on the resulting list, perhaps better
> list_move_tail()?

Yes, thanks.

--
WBR, Volodymyr
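To make the outcome of the first point concrete, here is a rough sketch of the
reassign_device() hunk with Jan's suggestion applied, i.e. taking the source
domain's lock directly instead of going through pdev->domain (the two are the
same at this point). The target-side part of the hunk is not quoted above, so
the second half below is only an assumption about how the move completes, not
the actual v9 code:

    if ( devfn == pdev->devfn && pdev->domain != target )
    {
        /* "source" equals pdev->domain here; use it directly. */
        write_lock(&source->pci_lock);
        list_del(&pdev->domain_list);
        write_unlock(&source->pci_lock);

        /* Assumed continuation: re-insert the device under the target's lock. */
        write_lock(&target->pci_lock);
        list_add(&pdev->domain_list, &target->pdev_list);
        pdev->domain = target;
        write_unlock(&target->pci_lock);
    }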
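Similarly, a minimal sketch of the narrower locking discussed for the
pci_add_device() error path: hardware_domain->pci_lock is held only across
the list operation, and the potentially slow printk() stays outside the
locked region. This only illustrates the agreed direction; as noted above,
the next patch in the series widens the locked region again to cover vPCI
(de)allocation:

        if ( ret )
        {
            printk(XENLOG_ERR "Setup of vPCI failed: %d\n", ret);

            /* Lock only the list manipulation itself. */
            write_lock(&hardware_domain->pci_lock);
            list_del(&pdev->domain_list);
            write_unlock(&hardware_domain->pci_lock);

            pdev->domain = NULL;
            goto out;
        }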
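Finally, a sketch of the failure handling in pci_release_devices() with
list_move_tail() applied as suggested, so devices that failed to deassign
keep their original relative order on failed_pdevs. It is built from the
quoted hunk; the accumulation of the error code at the end is an assumption,
since that part of the patch is not quoted above:

        write_unlock(&d->pci_lock);
        ret = deassign_device(d, seg, bus, devfn);
        write_lock(&d->pci_lock);
        if ( ret )
        {
            bool still_present = false;
            const struct pci_dev *tmp;

            /*
             * deassign_device() ran with d->pci_lock dropped, so the list
             * may have been reshuffled; only touch pdev if it is still on
             * the domain's list.
             */
            for_each_pdev ( d, tmp )
            {
                if ( tmp == pdev )
                {
                    still_present = true;
                    break;
                }
            }

            /* list_move_tail() preserves the original relative ordering. */
            if ( still_present )
                list_move_tail(&pdev->domain_list, &failed_pdevs);

            /* Assumed: report the first error to the caller. */
            combined_ret = combined_ret ?: ret;
        }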