Re: [Xen-devel] [PATCH 10/10] vpci/sriov: add support for SR-IOV capability
On Wed, Jun 20, 2018 at 04:42:34PM +0200, Roger Pau Monne wrote:
> So that a PCI device that supports SR-IOV (PF) can enable the capability
> and use the virtual functions.
>
> This code is expected to only be used by privileged domains,
> unprivileged domains should not get access to the SR-IOV capability.
>
> The current code detects enabling of the virtual functions feature and
> automatically adds the VFs to the domain. It also detects enabling of
> memory space and maps the VFs BARs into the domain p2m. Disabling of
> the VF enable bit removes the devices and the BAR memory map from the
> domain p2m.
>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

[...]

>      int rc;
> @@ -261,12 +273,29 @@ static int modify_bars(const struct pci_dev *pdev, bool map, bool rom_only)
>          }
>      }
>
> +    /* Get the parent dev if it's a VF. */
> +    if ( pdev->info.is_virtfn )
> +    {
> +        pcidevs_lock();
> +        parent = pci_get_pdev(pdev->seg, pdev->info.physfn.bus,
> +                              pdev->info.physfn.devfn);
> +        pcidevs_unlock();
> +    }
> +
>      /*
>       * Check for overlaps with other BARs. Note that only BARs that are
>       * currently mapped (enabled) are checked for overlaps.
>       */
>      list_for_each_entry(tmp, &pdev->domain->arch.pdev_list, domain_list)
>      {
> +        /*
> +         * When mapping the BARs of a VF the parent PF is already locked,
> +         * trying to lock it will result in a deadlock. This is because
> +         * vpci_modify_bars is called from the parent PF control_write
> +         * register handler.
> +         */
> +        bool lock = parent != tmp;

There is spin_lock_recursive. Would that work?
> +
>          if ( tmp == pdev )
>          {
>              /*
> @@ -283,10 +312,12 @@ static int modify_bars(const struct pci_dev *pdev, bool map, bool rom_only)
>              continue;
>          }
>
> -        spin_lock(&tmp->vpci_lock);
> +        if ( lock )
> +            spin_lock(&tmp->vpci_lock);
>          if ( !tmp->vpci )
>          {
> -            spin_unlock(&tmp->vpci_lock);
> +            if ( lock )
> +                spin_unlock(&tmp->vpci_lock);
>              continue;
>          }
>          for ( i = 0; i < ARRAY_SIZE(tmp->vpci->header.bars); i++ )
> @@ -307,14 +338,16 @@ static int modify_bars(const struct pci_dev *pdev, bool map, bool rom_only)
>              rc = rangeset_remove_range(mem, start, end);
>              if ( rc )
>              {
> -                spin_unlock(&tmp->vpci_lock);
> +                if ( lock )
> +                    spin_unlock(&tmp->vpci_lock);
>                  printk(XENLOG_G_WARNING "Failed to remove [%lx, %lx]: %d\n",
>                         start, end, rc);
>                  rangeset_destroy(mem);
>                  return rc;
>              }
>          }
> -        spin_unlock(&tmp->vpci_lock);
> +        if ( lock )
> +            spin_unlock(&tmp->vpci_lock);
>      }
>
>      ASSERT(dev);

[...]

> +static int init_sriov(struct pci_dev *pdev)
> +{
> +    unsigned int pos = pci_find_ext_capability(pdev->seg, pdev->bus,
> +                                               pdev->devfn,
> +                                               PCI_EXT_CAP_ID_SRIOV);
> +    uint16_t total_vfs;
> +
> +    if ( !pos )
> +        return 0;
> +
> +    total_vfs = pci_conf_read16(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> +                                PCI_FUNC(pdev->devfn),
> +                                pos + PCI_SRIOV_TOTAL_VF);
> +
> +    pdev->vpci->sriov = xzalloc_bytes(SRIOV_SIZE(total_vfs));
> +    if ( !pdev->vpci->sriov )
> +        return -ENOMEM;
> +
> +    return vpci_add_register(pdev->vpci, vpci_hw_read16, control_write,
> +                             pos + PCI_SRIOV_CTRL, 2, pdev->vpci->sriov);

If vpci_add_register fails, sriov is going to be leaked? Or eventually
that will cause the domain to crash in teardown. I think it would make
more sense if init_sriov is able to clean up after itself.

MSI and MSI-X code has a similar issue. I guess this is fine at the
moment because it is Dom0-only. But if we want to expose vpci to DomU
eventually we might as well tighten up things now.

I will need to re-read the SR-IOV spec before I can comment on the rest.

Wei.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel