
Re: [Xen-devel] [PATCH] IOMMU: don't disable bus mastering on faults for devices used by Xen or Dom0



On 05/11/2012 16:53, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> This is under the assumption that in these cases recurring faults
> aren't a security issue, and that the drivers there can be expected
> to try to take care of the problem.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

This one's sat a while with no comments...

 -- Keir
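
For anyone skimming the hunks below: both add the same ownership test in
front of the existing bus-mastering disable. Here is a minimal standalone
sketch of that decision; the struct layouts and the dom_xen/IS_PRIV
definitions in the sketch are simplified stand-ins for illustration only,
not Xen's real internals (those are the pci_get_pdev()/IS_PRIV()/dom_xen
used in the patch itself).

#include <stdbool.h>

/* Simplified stand-ins for Xen's real types and helpers (illustrative only). */
struct domain { bool is_privileged; };
struct pci_dev { struct domain *domain; };

static struct domain xen_domain;            /* models dom_xen (Xen-owned devices) */
#define dom_xen    (&xen_domain)
#define IS_PRIV(d) ((d)->is_privileged)     /* models Xen's IS_PRIV() check */

/*
 * Decide whether a fault from this device should leave bus mastering
 * enabled: devices owned by Xen itself or by the privileged domain (Dom0)
 * are expected to have their drivers deal with recurring faults.
 */
bool skip_bus_master_disable(const struct pci_dev *pdev)
{
    if ( !pdev )
        return false;                       /* unknown device: disable DMA */
    if ( pdev->domain == dom_xen )
        return true;                        /* used by Xen: leave DMA alone */
    return pdev->domain && IS_PRIV(pdev->domain); /* Dom0-owned: leave DMA alone */
}

In the hunks themselves this test is open-coded in both the AMD and VT-d
fault handlers, under pcidevs_lock, with pdev reset to NULL whenever the
bus-mastering disable should still go ahead.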

> --- a/xen/drivers/passthrough/amd/iommu_init.c
> +++ b/xen/drivers/passthrough/amd/iommu_init.c
> @@ -625,6 +625,18 @@ static void parse_event_log_entry(struct
>          for ( bdf = 0; bdf < ivrs_bdf_entries; bdf++ )
>              if ( get_dma_requestor_id(iommu->seg, bdf) == device_id )
>              {
> +                const struct pci_dev *pdev;
> +
> +                spin_lock(&pcidevs_lock);
> +                pdev = pci_get_pdev(iommu->seg, PCI_BUS(bdf),
> +                                    PCI_DEVFN2(bdf));
> +                if ( pdev && pdev->domain != dom_xen &&
> +                     (!pdev->domain || !IS_PRIV(pdev->domain)) )
> +                    pdev = NULL;
> +                spin_unlock(&pcidevs_lock);
> +
> +                if ( pdev )
> +                    continue;
> +
>                  cword = pci_conf_read16(iommu->seg, PCI_BUS(bdf),
>                                          PCI_SLOT(bdf), PCI_FUNC(bdf),
>                                          PCI_COMMAND);
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -916,7 +916,8 @@ static void __do_iommu_page_fault(struct
>      reg = cap_fault_reg_offset(iommu->cap);
>      while (1)
>      {
> -        u8 fault_reason;
> +        const struct pci_dev *pdev;
> +        u8 fault_reason, bus;
>          u16 source_id, cword;
>          u32 data;
>          u64 guest_addr;
> @@ -950,14 +951,27 @@ static void __do_iommu_page_fault(struct
>          iommu_page_fault_do_one(iommu, type, fault_reason,
>                                  source_id, guest_addr);
>  
> -        /* Tell the device to stop DMAing; we can't rely on the guest to
> -         * control it for us. */
> -        cword = pci_conf_read16(iommu->intel->drhd->segment,
> -                                PCI_BUS(source_id), PCI_SLOT(source_id),
> -                                PCI_FUNC(source_id), PCI_COMMAND);
> -        pci_conf_write16(iommu->intel->drhd->segment, PCI_BUS(source_id),
> -                         PCI_SLOT(source_id), PCI_FUNC(source_id),
> -                         PCI_COMMAND, cword & ~PCI_COMMAND_MASTER);
> +        bus = PCI_BUS(source_id);
> +
> +        spin_lock(&pcidevs_lock);
> +        pdev = pci_get_pdev(iommu->intel->drhd->segment, bus,
> +                            PCI_DEVFN2(source_id));
> +        if ( pdev && pdev->domain != dom_xen &&
> +             (!pdev->domain || !IS_PRIV(pdev->domain)) )
> +            pdev = NULL;
> +        spin_unlock(&pcidevs_lock);
> +
> +        if ( !pdev )
> +        {
> +            /* Tell the device to stop DMAing; we can't rely on the guest to
> +             * control it for us. */
> +            cword = pci_conf_read16(iommu->intel->drhd->segment, bus,
> +                                    PCI_SLOT(source_id), PCI_FUNC(source_id),
> +                                    PCI_COMMAND);
> +            pci_conf_write16(iommu->intel->drhd->segment, bus,
> +                             PCI_SLOT(source_id), PCI_FUNC(source_id),
> +                             PCI_COMMAND, cword & ~PCI_COMMAND_MASTER);
> +        }
>  
>          fault_index++;
>          if ( fault_index > cap_num_fault_regs(iommu->cap) )
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

