
Re: [Xen-devel] [PATCH v5 7/7] VT-d: Fix vt-d Device-TLB flush timeout issue.



> On February 17, 2016 10:41pm, <JBeulich@xxxxxxxx> wrote:
> >>> On 05.02.16 at 11:18, <quan.xu@xxxxxxxxx> wrote:
> > --- a/xen/drivers/passthrough/vtd/qinval.c
> > +++ b/xen/drivers/passthrough/vtd/qinval.c
> > +            if ( pci_hide_device(bus, devfn) )
> 
> But now I'm _really_ puzzled: You acquire the exact lock that
> pci_hide_device() acquires. Hence, unless I've overlooked an earlier change, I
> can't see this as other than an unconditional dead lock. Did you test this
> code path at all?

Sorry, I didn't test this code path.
I did test the following:
   1) Create a domain with an ATS device.
   2) Attach / detach the ATS device.

I think I could add a variant of pci_hide_device() without the
spin_lock(&pcidevs_lock) / spin_unlock(&pcidevs_lock) pair,
or without the __init annotation.
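
For illustration, a minimal sketch of such a variant, assuming the caller
already holds pcidevs_lock and reusing the existing pci.c helpers
(alloc_pdev(), get_pseg(), _pci_hide_device()); this is only a sketch,
not the actual patch:

/*
 * Hypothetical lock-free variant of pci_hide_device(): same logic, but
 * the caller must already hold pcidevs_lock, so it can be used from
 * call trees that reach the flush path with the lock held.
 */
static int pci_hide_device_locked(int bus, int devfn)
{
    struct pci_dev *pdev;

    ASSERT(spin_is_locked(&pcidevs_lock));

    pdev = alloc_pdev(get_pseg(0), bus, devfn);
    if ( !pdev )
        return -ENOMEM;

    _pci_hide_device(pdev);

    return 0;
}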

But it is certain that the pcidevs_lock state differs between the call trees
that end up flushing an ATS device.
I verified this as follows:
1. Print the pcidevs_lock status in flush_iotlb_qi():

flush_iotlb_qi()
{
...
+    printk("__ pcidevs_lock : %d *__\n", spin_is_locked(&pcidevs_lock));
...
}

2. Attach the ATS device:
      $ xl pci-attach TestDom 0000:81:00.0
  # The print is "(XEN) __ pcidevs_lock : 1 *__"

3. Set the domain's memory:
      $ xl mem-set TestDom 2047m
  # The print is "(XEN) __ pcidevs_lock : 0 *__"
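
Given these two states, one possible way (a hypothetical sketch, not what the
patch does) to hide a timed-out ATS device safely is to pass the lock state
down from the call tree instead of having pci_hide_device() take the lock
unconditionally; the "locked" helper is the sketch above, and the unlocked
case would also need pci_hide_device() to lose its __init annotation:

/*
 * Hypothetical caller on the Device-TLB flush timeout path.  "lock_held"
 * has to be propagated from the call tree: the pci-attach path holds
 * pcidevs_lock (prints "1" above), the mem-set path does not (prints "0").
 * Deciding via spin_is_locked() here would be racy; it is only used for
 * the debug printk above.
 */
static int hide_ats_device(int bus, int devfn, bool_t lock_held)
{
    if ( lock_held )
        return pci_hide_device_locked(bus, devfn);

    /* Needs pci_hide_device() to be callable after init (no __init). */
    return pci_hide_device(bus, devfn);
}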

Quan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

