Re: [Xen-devel] [PATCH 1/1] x86/AMD: Fix setup ssss:bb:dd:f for d0 failed
>>> On 07.08.13 at 04:40, <suravee.suthikulpanit@xxxxxxx> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -175,10 +175,22 @@ static int __init amd_iommu_setup_dom0_device(u8 devfn, struct pci_dev *pdev)
>
>      if ( unlikely(!iommu) )
>      {
> -        AMD_IOMMU_DEBUG("No iommu for device %04x:%02x:%02x.%u\n",
> -                        pdev->seg, pdev->bus,
> -                        PCI_SLOT(devfn), PCI_FUNC(devfn));
> -        return -ENODEV;
> +        /* Filter the bridge devices */
> +        if ( (pdev->type == DEV_TYPE_PCIe_BRIDGE)
> +            || (pdev->type == DEV_TYPE_PCIe2PCI_BRIDGE)
> +            || (pdev->type == DEV_TYPE_LEGACY_PCI_BRIDGE)
> +            || (pdev->type == DEV_TYPE_PCI_HOST_BRIDGE) )

Indentation.

> +        {
> +            AMD_IOMMU_DEBUG("Skipping device %04x:%02x:%02x.%u (type %x)\n",
> +                            pdev->seg, PCI_BUS(bdf), PCI_SLOT(bdf), PCI_FUNC(bdf),
> +                            pdev->type);

I can see why host bridges can be skipped, but is this really also true
for other bridge types, most importantly legacy ones?

Also, please say at least "bridge" here instead of "device", to make
matters as clear as possible to people inspecting logs.

> +            return 0;
> +        } else {

No need for the "else" here; dropping it will reduce the churn the
patch causes, ...

> +            AMD_IOMMU_DEBUG("No iommu for device %04x:%02x:%02x.%u\n",
> +                            pdev->seg, pdev->bus,
> +                            PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
> +            return -ENODEV;
> +        }

... as the indentation of this then won't need to change.

> @@ -691,35 +690,43 @@ void pci_release_devices(struct domain *d)
>      spin_unlock(&pcidevs_lock);
>  }
>
> -#define PCI_CLASS_BRIDGE_PCI 0x0604
> +#define PCI_CLASS_HOST_PCI_BRIDGE 0x0600
> +#define PCI_CLASS_PCI_PCI_BRIDGE 0x0604

Please don't needlessly introduce names different from their Linux
counterparts.

>  enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn)
>  {
> -    u16 class_device, creg;
> +    u16 creg;
>      u8 d = PCI_SLOT(devfn), f = PCI_FUNC(devfn);
> -    int pos = pci_find_cap_offset(seg, bus, d, f, PCI_CAP_ID_EXP);
> +    u16 class_device = pci_conf_read16(seg, bus, d, f, PCI_CLASS_DEVICE);
> +    int cap_offset = pci_find_cap_offset(seg, bus, d, f, PCI_CAP_ID_EXP);
>
> -    class_device = pci_conf_read16(seg, bus, d, f, PCI_CLASS_DEVICE);
>      switch ( class_device )
>      {
> -    case PCI_CLASS_BRIDGE_PCI:
> -        if ( !pos )
> +    case PCI_CLASS_PCI_PCI_BRIDGE:
> +        if ( !cap_offset )
>              return DEV_TYPE_LEGACY_PCI_BRIDGE;
> -        creg = pci_conf_read16(seg, bus, d, f, pos + PCI_EXP_FLAGS);
> +
> +        creg = pci_conf_read16(seg, bus, d, f, cap_offset + PCI_EXP_FLAGS);
> +
>          switch ( (creg & PCI_EXP_FLAGS_TYPE) >> 4 )
>          {
>          case PCI_EXP_TYPE_PCI_BRIDGE:
>              return DEV_TYPE_PCIe2PCI_BRIDGE;
>          case PCI_EXP_TYPE_PCIE_BRIDGE:
>              return DEV_TYPE_PCI2PCIe_BRIDGE;
> +        default:
> +            return DEV_TYPE_PCIe_BRIDGE;
>          }
> -        return DEV_TYPE_PCIe_BRIDGE;
> +        break;
> +
> +    case PCI_CLASS_HOST_PCI_BRIDGE:
> +        return DEV_TYPE_PCI_HOST_BRIDGE;

All the changes to this function apart from the above two lines look
entirely unrelated to the intentions of the patch. Please drop them,
or move them to a follow-up cleanup patch.
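To illustrate, the reduced change I have in mind here is something along
these lines (only a sketch, and I'm assuming the new constant would get
the Linux name PCI_CLASS_BRIDGE_HOST, next to the existing
PCI_CLASS_BRIDGE_PCI):

#define PCI_CLASS_BRIDGE_HOST 0x0600       /* matching the Linux name */

    switch ( class_device )
    {
    case PCI_CLASS_BRIDGE_PCI:
        if ( !pos )
            return DEV_TYPE_LEGACY_PCI_BRIDGE;
        creg = pci_conf_read16(seg, bus, d, f, pos + PCI_EXP_FLAGS);
        switch ( (creg & PCI_EXP_FLAGS_TYPE) >> 4 )
        {
        case PCI_EXP_TYPE_PCI_BRIDGE:
            return DEV_TYPE_PCIe2PCI_BRIDGE;
        case PCI_EXP_TYPE_PCIE_BRIDGE:
            return DEV_TYPE_PCI2PCIe_BRIDGE;
        }
        return DEV_TYPE_PCIe_BRIDGE;

    case PCI_CLASS_BRIDGE_HOST:             /* the only functional addition */
        return DEV_TYPE_PCI_HOST_BRIDGE;
    }

i.e. the declarations, the "pos" variable, and the inner switch all stay
exactly as they are today.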
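And going back to the amd_iommu_setup_dom0_device() hunk at the top, the
shape without the "else" that I'm suggesting would be roughly the below
(again only a sketch - the message wording and the use of pdev->bus /
devfn, matching the pre-existing message, are mine, and whether the
non-host bridge types belong in the list at all is the open question
raised above):

    if ( unlikely(!iommu) )
    {
        /* Skip bridges rather than failing Dom0 setup for them. */
        if ( pdev->type == DEV_TYPE_PCIe_BRIDGE ||
             pdev->type == DEV_TYPE_PCIe2PCI_BRIDGE ||
             pdev->type == DEV_TYPE_LEGACY_PCI_BRIDGE ||
             pdev->type == DEV_TYPE_PCI_HOST_BRIDGE )
        {
            AMD_IOMMU_DEBUG("Skipping bridge %04x:%02x:%02x.%u (type %x)\n",
                            pdev->seg, pdev->bus, PCI_SLOT(devfn),
                            PCI_FUNC(devfn), pdev->type);
            return 0;
        }

        AMD_IOMMU_DEBUG("No iommu for device %04x:%02x:%02x.%u\n",
                        pdev->seg, pdev->bus,
                        PCI_SLOT(devfn), PCI_FUNC(devfn));
        return -ENODEV;
    }

That way the pre-existing "No iommu" message and return don't need to be
re-indented at all.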
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1448,6 +1448,7 @@ static int domain_context_mapping(
>          break;
>
>      case DEV_TYPE_PCI:
> +    case DEV_TYPE_PCI_HOST_BRIDGE:
>          if ( iommu_verbose )
>              dprintk(VTDPREFIX, "d%d:PCI: map %04x:%02x:%02x.%u\n",
>                      domain->domain_id, seg, bus,
> @@ -1577,6 +1578,7 @@ static int domain_context_unmap(
>          break;
>
>      case DEV_TYPE_PCI:
> +    case DEV_TYPE_PCI_HOST_BRIDGE:
>          if ( iommu_verbose )
>              dprintk(VTDPREFIX, "d%d:PCI: unmap %04x:%02x:%02x.%u\n",
>                      domain->domain_id, seg, bus, PCI_SLOT(devfn),
>                      PCI_FUNC(devfn));

Did you really grep for the uses of DEV_TYPE_PCI? I see another similar
use in xen/drivers/passthrough/vtd/intremap.c which would - afaict -
also need similar adjustment.

And in any case I'm expecting Xiantao to comment on whether host bridges
should be treated any different from normal PCI devices.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel