
Re: [Xen-devel] another regression from IRQ handling changes



>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 22.09.09 10:39 >>>
>On 22/09/2009 09:18, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>
>> An issue I had fixed a little while before your changes now appears to
>> exist again (can't actively verify it due to lack of access to a sufficiently
>> big system): While we now can handle more than 192 interrupt sources,
>> those are again confined to the first 256 IO-APIC pins. We know,
>> however, that there are systems with well over 300 pins (most of which
>> are typically unused, and hence being able to "only" handle 192 interrupt
>> sources doesn't really present a problem on those systems).
>> 
>> Clearly, handling of more than 256 (non-MSI) interrupt sources cannot
>> be done without a kernel side change, since there needs to be a
>> replacement for the 8-bit vector information conveyed through the
>> kernel writes to the IO-APIC redirection table entries. However, up to
>> 256 interrupt sources could easily be handled without kernel side
>> change, by making PHYSDEVOP_alloc_irq_vector return a fake vector
>> (rather than the unmodified irq that got passed in).
>
>If it wasn't broken before, it was probably broken by Xiantao's follow-up
>patch to fix NetBSD dom0 (at least as much as possible, to avoid a nasty
>regression with NetBSD). What we probably need to do is have a 256-entry
>dom0_vector_to_dom0_irq[] array, and allocate an entry from that for every
>fresh irq we see at PHYSDEVOP_alloc_irq_vector. Then when the vector gets
>passed back in on ioapic writes, we index into that array rather than using
>naked rte.vector.
>
>How does that sound?

Yeah, that's what I would view as the solution to get the old functionality
back. But my question also extended to possible solutions for getting beyond
256 here, especially ones that would also be acceptable to the pv-ops Dom0,
which I'm much less certain about.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

