
[Xen-devel] RE: another regression from IRQ handling changes



>>> "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx> 22.09.09 10:56 >>>
>Jan Beulich wrote:
>> An issue I had fixed a little while before your changes now appears to
>> exist again (can't actively verify it due to lack of access to a
>> sufficiently big system): While we now can handle more than 192
>> interrupt sources, 
>> those again are confined to the first 256 IO-APIC pins. We know,
>> however, that there are systems with well over 300 pins (most of which
>> are typically unused, and hence being able to "only" handle 192
>> interrupt sources doesn't really present a problem on these systems).
>> 
>> Clearly, handling of more than 256 (non-MSI) interrupt sources cannot
>> be done without a kernel side change, since there needs to be a
>> replacement for the 8-bit vector information conveyed through the
>> kernel writes to the IO-APIC redirection table entries. However, up to
>> 256 interrupt sources could easily be handled without kernel side
>> change, by making PHYSDEVOP_alloc_irq_vector return a fake vector
>> (rather than the unmodified irq that got passed in).
>
>Jan, 
>   Are you sure it worked well with more than 256 pins before my IRQ
>changes?  If you refer to dom0's code (linux-2.6.18.8-xen.hg), the GSI
>irq number can never be bigger than 256.  I think this is an old issue
>that needs to be addressed by modifying the ABI between dom0 and the
>hypervisor, rather than an issue introduced by the IRQ handling
>changes.  That is to say, Xen has never worked with big systems that
>have more than 256 IO-APIC pins.

I can't say anything about the 2.6.18 tree, but I'm certain it worked
with newer Dom0-s, e.g. the forward-ported 2.6.27-based ones.
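
For illustration only, here is a minimal sketch of the fake-vector idea
described above (this is not actual Xen or dom0 code; all names are made
up): the hypervisor would hand out an allocated 8-bit value from
PHYSDEVOP_alloc_irq_vector instead of the unmodified irq, and keep a
table to translate the value the kernel later writes into the IO-APIC
RTE back to the GSI.

#include <stdint.h>

#define NR_FAKE_VECTORS 256

/* Hypothetical table: fake vector -> irq (GSI).  Entries below
 * next_fake_vector are valid; nothing beyond it is ever read. */
static int fake_vector_to_irq[NR_FAKE_VECTORS];
static unsigned int next_fake_vector;

/* Allocate (or reuse) an 8-bit fake vector for a given irq. */
static int alloc_fake_vector(int irq)
{
    unsigned int v;

    /* Reuse an existing mapping if this irq was already assigned one. */
    for ( v = 0; v < next_fake_vector; v++ )
        if ( fake_vector_to_irq[v] == irq )
            return v;

    if ( next_fake_vector >= NR_FAKE_VECTORS )
        return -1; /* more than 256 sources would need a kernel/ABI change */

    fake_vector_to_irq[next_fake_vector] = irq;
    return next_fake_vector++;
}

/* Translate the 8-bit value the kernel wrote into the RTE back to an irq. */
static int fake_vector_to_gsi(uint8_t vec)
{
    return vec < next_fake_vector ? fake_vector_to_irq[vec] : -1;
}

With something along these lines the existing 8-bit field in the
redirection table entry keeps working for up to 256 interrupt sources,
regardless of how many physical IO-APIC pins the box has; only going
beyond 256 sources would require the ABI change mentioned above.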

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

