
[Xen-devel] Re: [PATCH] IRQ: manually EOI migrating line interrupts

On 30/08/11 15:38, Keir Fraser wrote:
> On 30/08/2011 15:28, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx> wrote:
>>> @@ -1739,6 +1739,14 @@ static void end_level_ioapic_irq (unsign
>>>   */
>>>      i = IO_APIC_VECTOR(irq);
>>> +    /* Manually EOI the old vector if we are moving to the new */
>>> +    if ( vector && i != vector )
>>> +    {
>>> +        int ioapic;
>>> +        for (ioapic = 0; ioapic < nr_ioapics; ioapic++)
>>> +            io_apic_eoi(ioapic, i);
>>> +    }
>>> +
> I don't know whether it's worth the effort, but we ought to be able to do
> better than this and send EOI to exactly the correct IO-APIC. I think
> irq=gsi here? And we should know the gsi_base of every IO-APIC, so we can
> work out in fact which pin of which IO-APIC needs clobbering?
>  -- Keir

irq does (or really should) equal gsi.  I had not noticed gsi_base and
gsi_end when making this fix.

io_apic_eoi does not require a pin, but it uses IO-APIC registers for
which I cannot find any documentation.  The Local APIC document implies
that you just write the vector to the IO-APIC's EOI register, and the
IO-APIC works out which pin(s) to clear.

However, because Xen currently may assign the same vector to two pins
in the same IO-APIC, changing the code at this point would not fix the
problem, only make it rarer.  I would therefore suggest it is not worth
the effort: the problem is already very rare, and unlikely to occur on
any sane hardware, which avoids line-based PCI INTx interrupts where
possible.

P.S.  If anyone knows which manual contains the specification/programming
guide for the IO-APIC, I would be very grateful.  Google always points
to the 82093AA datasheet, which is very out of date.

Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com

Xen-devel mailing list


