
Re: [Xen-devel] [PATCH] x86: change IO-APIC ack method default for single IO-APIC systems



>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 21.01.09 15:35 >>>
>On 21/01/2009 14:21, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>> "ioapic_ack=old". With that knowledge I think it is now reasonable to
>> include that patch in -unstable, as the introduction of the 'new' ack
>> method was only to address issues with certain chipsets silently
>> setting up alternative IRQ routes when RTEs in secondary IO-APICs got
>> masked.
>
>I don't specifically recall that this issue required two IO-APICs. In fact I
>think it did not. It was actually something to do with the chipset trying to
>distinguish between an OS using 'legacy' routing versus 'mp-bios' routing,
>via a rather distasteful IO-APIC hack. Unfortunately the hack was not that
>uncommon and I don't think those chipsets had more than one IO-APIC.

I'm rather certain that it did involve multiple IO-APICs. What the chipsets
were trying to cover was the ACPI vs. no-ACPI case, since secondary IO-APICs
generally can be discovered only via ACPI (or should I say were being
discovered only via ACPI on "certain" OSes at the time). Hence when an IRQ
normally routed to a secondary IO-APIC's pin got masked in that IO-APIC, a
replacement route to a pin of the primary IO-APIC was automatically
established (and not torn down again when the mask bit got cleared).

>Overall I think ack_type new has worked quite well. I was actually about to
>remove the old ack_type! (But now I won't ;-) I'm not inclined to take this
>patch though.

If I had an affected system, I'd try to debug the issue (though remembering
how long it took to understand the original issue, I'm hesitant to promise
anything). With the above explanation I hope you may reconsider...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

