Re: [PATCH v6 2/3] xen/pci: introduce PF<->VF links
On 11/12/24 04:39, Jan Beulich wrote:
> On 12.11.2024 10:02, Roger Pau Monné wrote:
>> On Mon, Nov 11, 2024 at 03:07:28PM -0500, Stewart Hildebrand wrote:
>>> On 10/28/24 14:41, Roger Pau Monné wrote:
>>>>     if ( !pdev->info.is_virtfn && !list_empty(&pdev->vf_list) )
>>>>     {
>>>>         struct pci_dev *vf_pdev;
>>>>
>>>>         while ( (vf_pdev = list_first_entry_or_null(&pdev->vf_list,
>>>>                                                      struct pci_dev,
>>>>                                                      vf_list)) != NULL )
>>>>         {
>>>>             list_del(&vf_pdev->vf_list);
>>>>             vf_pdev->virtfn.pf_pdev = NULL;
>>>>             vf_pdev->broken = true;
>>>>         }
>>>>
>>>>         printk(XENLOG_WARNING "PCI SR-IOV PF %pp removed with VFs still present\n",
>>>>                &pdev->sbdf);
>>>>     }
>>>
>>> Yeah. Given that the consensus is leaning toward keeping the PF and
>>> returning an error, here's my suggestion:
>>>
>>>     if ( !pdev->info.is_virtfn && !list_empty(&pdev->vf_list) )
>>>     {
>>>         struct pci_dev *vf_pdev;
>>>
>>>         list_for_each_entry(vf_pdev, &pdev->vf_list, vf_list)
>>>             vf_pdev->broken = true;
>>>
>>>         pdev->broken = true;
>>
>> Do you need to mark the devices as broken?  My expectation would be
>> that returning -EBUSY here should prevent the device from being
>> removed, and hence there would be no breakage, just failure to fulfill
>> the (possible) hot-unplug request.
>
> That very much depends on Dom0 kernels then actually respecting the error,
> and not considering the underlying hypercall a mere notification.

All dom0 Linux does is print a warning:

# echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/sriov_numvfs
# echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
[   56.738750] 0000:01:00.0: driver left SR-IOV enabled after remove
(XEN) Attempted to remove PCI SR-IOV PF 0000:01:00.0 with VFs still present
[   56.749904] pci 0000:01:00.0: Failed to delete - passthrough or MSI/MSI-X might fail!
# echo $?
0

Subsequently, lspci reveals no entry for 0000:01:00.0.

I think it's appropriate to mark them broken.
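
Putting the two quoted fragments together, the variant under discussion
could look roughly like the sketch below. This is a sketch only: the
placement inside the PCI removal path and the surrounding locking are
assumed from context rather than shown in this thread, and returning
-EBUSY follows Roger's suggestion rather than either quoted snippet.

    /*
     * Sketch only: refuse to remove a PF that still has VFs, but also
     * flag the devices as broken in case dom0 ignores the error (as the
     * Linux log above shows it does).
     */
    if ( !pdev->info.is_virtfn && !list_empty(&pdev->vf_list) )
    {
        struct pci_dev *vf_pdev;

        /* Keep the PF<->VF links intact, but mark everything broken. */
        list_for_each_entry(vf_pdev, &pdev->vf_list, vf_list)
            vf_pdev->broken = true;

        pdev->broken = true;

        printk(XENLOG_WARNING
               "Attempted to remove PCI SR-IOV PF %pp with VFs still present\n",
               &pdev->sbdf);

        /*
         * Refuse the removal. Whether this actually prevents breakage
         * depends on dom0 honouring the error instead of treating the
         * hypercall as a mere notification.
         */
        return -EBUSY;
    }

Marking the devices broken only matters if dom0 proceeds with the removal
anyway, which is exactly what the transcript above demonstrates.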