
Re: [Xen-devel] [PATCH] xen: reuse the same pirq allocated when driver load first time



On Wed, May 29, 2013 at 06:50:41PM +0100, Stefano Stabellini wrote:
> On Tue, 21 May 2013, Stefano Stabellini wrote:
> > On Tue, 21 May 2013, Konrad Rzeszutek Wilk wrote:
> > > > Looking at the hypervisor code I couldn't see anything obviously wrong.
> > > 
> > > I think the culprit is "physdev_unmap_pirq":
> > > 
> > >     if ( is_hvm_domain(d) )
> > >     {
> > >         spin_lock(&d->event_lock);
> > >         gdprintk(XENLOG_WARNING,"d%d, pirq: %d is %x %s, irq: %d\n",
> > >             d->domain_id, pirq, domain_pirq_to_emuirq(d, pirq),
> > >             domain_pirq_to_emuirq(d, pirq) == IRQ_UNBOUND ? "unbound" : "",
> > >             domain_pirq_to_irq(d, pirq));
> > >
> > >         if ( domain_pirq_to_emuirq(d, pirq) != IRQ_UNBOUND )
> > >             ret = unmap_domain_pirq_emuirq(d, pirq);
> > >         spin_unlock(&d->event_lock);
> > >         if ( domid == DOMID_SELF || ret )
> > >             goto free_domain;
> > > 
> > > It always tells me unbound:
> > > 
> > > (XEN) physdev.c:237:d14 14, pirq: 54 is ffffffff
> > > (XEN) irq.c:1873:d14 14, nr_pirqs: 56
> > > (XEN) physdev.c:237:d14 14, pirq: 53 is ffffffff
> > > (XEN) irq.c:1873:d14 14, nr_pirqs: 56
> > > (XEN) physdev.c:237:d14 14, pirq: 52 is ffffffff
> > > (XEN) irq.c:1873:d14 14, nr_pirqs: 56
> > > (XEN) physdev.c:237:d14 14, pirq: 51 is ffffffff
> > > (XEN) irq.c:1873:d14 14, nr_pirqs: 56
> > > (XEN) physdev.c:237:d14 14, pirq: 50 is ffffffff
> > > (XEN) irq.c:1873:d14 14, nr_pirqs: 56
> > > (a bit older debug code, so the 'unbound' does not show up here).
> > > 
> > > Which means that the call to unmap_domain_pirq_emuirq does not happen.
> > > The checks in unmap_domain_pirq_emuirq also look to depend
> > > on the emuirq not being IRQ_UNBOUND.
> > > 
> > > In other words, all of that code looks to only clear things when
> > > they are !IRQ_UNBOUND.
> > > 
> > > But the other logic (IRQ_UNBOUND) looks to be missing a removal
> > > in the radix tree:
> > > 
> > >     if ( emuirq != IRQ_PT )
> > >         radix_tree_delete(&d->arch.hvm_domain.emuirq_pirq, emuirq);
> > >
> > > And I think that is what is causing the leak - the radix tree
> > > needs to be pruned? Or perhaps the allocate_pirq should check
> > > the radix tree for IRQ_UNBOUND ones and re-use them?
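For illustration only, here is a tiny self-contained model of that "re-use"
idea. A plain array stands in for the emuirq radix tree and none of the names
below are real hypervisor code; it only shows why the allocated pirq counts
down (54, 53, 52, ...) when stale entries are never reclaimed:

    #include <stdio.h>

    #define NR_PIRQS    56
    #define IRQ_UNBOUND (-1)

    /* Stand-in for the pirq -> emuirq mapping kept in the radix tree. */
    static int emuirq_of[NR_PIRQS];

    /* Allocate top-down, skipping slots that still hold an entry; with
     * reuse_stale set, a slot left IRQ_UNBOUND is handed out again. */
    static int alloc_pirq(int reuse_stale)
    {
        int pirq;

        for (pirq = NR_PIRQS - 1; pirq >= 0; pirq--) {
            if (emuirq_of[pirq] == 0)                     /* never used */
                return pirq;
            if (reuse_stale && emuirq_of[pirq] == IRQ_UNBOUND)
                return pirq;                              /* reclaim it */
        }
        return -1;
    }

    int main(void)
    {
        int i, pirq;

        /* Five load/unload cycles; "unmap" only marks the slot unbound,
         * so without reuse the numbers keep counting down. */
        for (i = 0; i < 5; i++) {
            pirq = alloc_pirq(0);
            if (pirq < 0)
                break;
            printf("load %d got pirq %d\n", i, pirq);
            emuirq_of[pirq] = IRQ_UNBOUND;
        }
        return 0;
    }
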
> > 
> > I think that you are looking in the wrong place.
> > The issue is that QEMU doesn't call pt_msi_disable in
> > pt_msgctrl_reg_write when !(val & PCI_MSI_FLAGS_ENABLE).
> > 
> > The code above is correct as is because it is trying to handle emulated
> > IRQs and MSIs, not real passthrough MSIs. The latter are not added to
> > that radix tree, see physdev_hvm_map_pirq and physdev_map_pirq.
> 
> 
> This patch fixes the issue; I have only tested MSI (MSI-X is completely
> untested).

I tested it on my NIC which has MSI and it worked nicely. The other box
with MSI-X is giving me a bit of trouble, so that will have to wait.

Duan,

Could you - when you get a moment - also try this with the PCI device
that was having issues please?
> 
> 
> diff --git a/hw/pass-through.c b/hw/pass-through.c
> index 304c438..079e465 100644
> --- a/hw/pass-through.c
> +++ b/hw/pass-through.c
> @@ -3866,7 +3866,11 @@ static int pt_msgctrl_reg_write(struct pt_dev *ptdev,
>          ptdev->msi->flags |= PCI_MSI_FLAGS_ENABLE;
>      }
>      else
> -        ptdev->msi->flags &= ~PCI_MSI_FLAGS_ENABLE;
> +    {
> +        if (ptdev->msi->flags & PT_MSI_MAPPED) {
> +            pt_msi_disable(ptdev);
> +        }
> +    }
>  
>      /* pass through MSI_ENABLE bit when no MSI-INTx translation */
>      if (!ptdev->msi_trans_en) {
> @@ -4013,6 +4017,8 @@ static int pt_msixctrl_reg_write(struct pt_dev *ptdev,
>              pt_disable_msi_translate(ptdev);
>          }
>          pt_msix_update(ptdev);
> +    } else if (!(*value & PCI_MSIX_ENABLE) && ptdev->msix->enabled) {
> +        pt_msix_delete(ptdev);
>      }
>  
>      ptdev->msix->enabled = !!(*value & PCI_MSIX_ENABLE);
> diff --git a/hw/pt-msi.c b/hw/pt-msi.c
> index b03b989..65fa7d6 100644
> --- a/hw/pt-msi.c
> +++ b/hw/pt-msi.c
> @@ -213,7 +213,8 @@ void pt_msi_disable(struct pt_dev *dev)
>  
>  out:
>      /* clear msi info */
> -    dev->msi->flags &= ~(MSI_FLAG_UNINIT | PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
> +    dev->msi->flags &= ~(PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
> +    dev->msi->flags |= MSI_FLAG_UNINIT;
>      dev->msi->pirq = -1;
>      dev->msi_trans_en = 0;
>  }
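
For what it is worth, here is a stand-alone sketch of the message-control
write path that the first hunk changes. The constants (other than the PCI
enable bit) and the helpers are stand-ins rather than the real qemu code;
it just shows the difference between only clearing the flag and actually
tearing the mapping down:

    #include <stdint.h>
    #include <stdio.h>

    #define PCI_MSI_FLAGS_ENABLE  0x0001  /* enable bit in MSI message control */
    #define PT_MSI_MAPPED         0x0100  /* stand-in value, not the real define */

    struct msi_state {
        uint16_t flags;
        int      pirq;
    };

    /* Model of what pt_msi_disable() has to do: drop the pirq mapping and
     * clear the bookkeeping flags, instead of just clearing the enable bit. */
    static void msi_disable(struct msi_state *msi)
    {
        printf("unmapping pirq %d\n", msi->pirq);
        msi->flags &= ~(PT_MSI_MAPPED | PCI_MSI_FLAGS_ENABLE);
        msi->pirq = -1;
    }

    /* Model of the message-control write handler after the patch: the old
     * code only did "flags &= ~ENABLE" in the else branch, leaving the pirq
     * mapped in the hypervisor. */
    static void msgctrl_write(struct msi_state *msi, uint16_t val)
    {
        if (val & PCI_MSI_FLAGS_ENABLE)
            msi->flags |= PCI_MSI_FLAGS_ENABLE;
        else if (msi->flags & PT_MSI_MAPPED)
            msi_disable(msi);
    }

    int main(void)
    {
        struct msi_state msi = { PCI_MSI_FLAGS_ENABLE | PT_MSI_MAPPED, 54 };

        msgctrl_write(&msi, 0);   /* guest clears the MSI enable bit */
        return 0;
    }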
