
Re: [Xen-devel] Legacy PCI interrupt {de}assertion count



On Mon, Apr 03, 2017 at 02:22:36PM +0200, Sander Eikelenboom wrote:
> On 31/03/17 16:38, Konrad Rzeszutek Wilk wrote:
> > On Fri, Mar 31, 2017 at 04:46:27AM -0600, Jan Beulich wrote:
> >>>>> On 31.03.17 at 10:07, <roger.pau@xxxxxxxxxx> wrote:
> >>> On Fri, Mar 31, 2017 at 05:05:44AM +0000, Tian, Kevin wrote:
> >>>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >>>>> Sent: Monday, March 27, 2017 4:00 PM
> >>>>>
> >>>>>>>> On 24.03.17 at 17:54, <roger.pau@xxxxxxxxxx> wrote:
> >>>>>> As I understand it, for level-triggered legacy PCI interrupts Xen sets
> >>>>>> up a timer in order to perform the EOI if the guest takes too long in
> >>>>>> deasserting the line. This is done in pt_irq_time_out. What I don't
> >>>>>> understand is why this function also does a deassertion of the guest
> >>>>>> view of the PCI interrupt, i.e. why it calls hvm_pci_intx_deassert.
> >>>>>> This AFAICT will clear the pending assert in the guest, and thus the
> >>>>>> guest will end up losing one interrupt.
> >>>>>
> >>>>> Especially with the comment next to the respective set_timer() it looks
> >>>>> to me as if this was the intended effect: if the guest didn't care to at
> >>>>> least start handling the interrupt within PT_IRQ_TIME_OUT, we want it to
> >>>>> look lost, in order not to have it block other interrupts inside the
> >>>>> guest (i.e. there's more to it than just guarding the host here).
> >>>>>
> >>>>> "Luckily" commit 0f843ba00c ("vt-d: Allow pass-through of shared
> >>>>> interrupts"), which introduced this, has no description at all. Let's
> >>>>> see if Kevin remembers any further details ...
> >>>>>
> >>>>
> >>>> Sorry, I don't remember any more detail than the existing comments.
> >>>> Roger, have you encountered a problem with this?
> >>>
> >>> No, I haven't encountered any problems with this so far; any well-behaved
> >>> guest will deassert those lines anyway, it just seems to be against the
> >>> spec. AFAIK on bare metal the line will stay asserted until the OS
> >>> deasserts it, so I was wondering if this was some kind of workaround?
> >>
> >> "OS deasserts" is a term I don't understand. Aiui it's the origin device
> >> which would need to de-assert its interrupt, and I think it is not
> >> uncommon for devices to de-assert interrupts after a certain amount
> >> of time. If that wasn't the case, spurious interrupts could never occur.
> > 
> > I recall Sander (CC-ed) here hitting this at some point. There was some
> > device he had (legacy?) that would very much hit this path.
> > 
> > But I can't recall the details, sorry.
> > 
> > Sander, it was in the context of the dpci softirq work I did, if that helps.
> 
> Hi Konrad,
> 
> You mean these?

Yes, but I can't seem to find in those threads the name of the device you
had - the one that was triggering those legacy interrupts. Do you by any
chance recall what it was?
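
For reference, the behaviour Roger and Jan are discussing boils down to
roughly the following. This is only an illustrative sketch, not the actual
code from Xen's passthrough layer: the pt_irq_binding layout, the prototypes
and the host_eoi() stub are invented for the example; only the names
pt_irq_time_out() and hvm_pci_intx_deassert() are taken from the thread
above.

    /* Illustrative sketch only -- NOT the real Xen implementation. */
    struct domain;                                   /* opaque here */

    /* Prototypes assumed for the sketch, not copied from Xen. */
    void hvm_pci_intx_deassert(struct domain *d,
                               unsigned int device, unsigned int intx);
    void host_eoi(unsigned int host_irq);            /* invented stub */

    struct pt_irq_binding {                          /* invented layout */
        struct domain *d;
        unsigned int device, intx;       /* guest view of the INTx line */
        unsigned int host_irq;           /* physical line */
    };

    /* Timer callback: fires when the guest holds the line too long. */
    static void pt_irq_time_out(void *data)
    {
        struct pt_irq_binding *b = data;

        /*
         * The guest did not start handling (and deasserting) the line
         * within PT_IRQ_TIME_OUT: drop the pending assert in the
         * guest's virtual view, so a shared vIRQ is not left blocked
         * for anything else behind it ...
         */
        hvm_pci_intx_deassert(b->d, b->device, b->intx);

        /*
         * ... and EOI the physical interrupt so the host is not stuck
         * with an unacknowledged line.  From the guest's point of view
         * this one interrupt simply looks lost.
         */
        host_eoi(b->host_irq);
    }

In other words, once the timer fires the pending assert is dropped from the
guest's view and the host line is EOI'd, so the guest sees that interrupt as
lost - which matches Jan's reading of the intent above.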

> 
> The issue leading up to this revert for xen-4.5:
> https://lists.xen.org/archives/html/xen-devel/2015-01/msg01025.html
> 
> This seems to be the thread that started the conversation leading up to
> that revert:
> https://lists.xenproject.org/archives/html/xen-devel/2014-11/msg01330.html
> 
> That then continued for xen-4.6 in a thread with the subject "dpci: Put the
> dpci back on the list if scheduled from another CPU.", which is spread out
> over several months (this is somewhere in the middle of it:
> https://lists.xenproject.org/archives/html/xen-devel/2015-03/msg02102.html ).
> 
> --
> Sander
> 
> >>
> >> Jan
> >>
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
