
[Xen-devel] Re: Physical and dynamic event channels





On 11/8/06 12:41 am, "Jeremy Fitzhardinge" <jeremy@xxxxxxxx> wrote:

> In the current Xen patches, it reserves a range of 256 irqs for 1:1
> mapping with hardware irqs, and another 256 for dynamically allocated
> event channels.
> 
> Is this really necessary?  256 irqs for hardware interrupts is excessive
> in itself, because there can only be 224.  But aside from that, is it
> necessary to have a 1:1 mapping?  Couldn't they be dynamic as well?  Do
> some subset of "historic" irqs need to be 1:1 mapped, but everything
> else is up for grabs?
> 
> How many event channels do guests end up using anyway?  Would 256 irqs
> be enough for general use?

256 would be enough for anyone, I think. Domain 0 is obviously the main user
of event channels, but most of those are demuxed to userspace rather than
into the kernel IRQ space.
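
For concreteness, the split Jeremy describes amounts to roughly the
following (a minimal sketch; the constant names are illustrative, not
lifted from the actual patches):

  /* Illustrative sketch of the current irq-space split: a fixed range
   * for 1:1 "physical" irqs and another for dynamically bound event
   * channels.  Names are made up for the example. */
  #include <stdio.h>

  #define PIRQ_BASE     0                        /* 1:1 hardware irqs      */
  #define NR_PIRQS      256
  #define DYNIRQ_BASE   (PIRQ_BASE + NR_PIRQS)   /* dynamic event channels */
  #define NR_DYNIRQS    256

  static int pirq_to_irq(int pirq)     { return PIRQ_BASE + pirq; }
  static int dynirq_to_irq(int index)  { return DYNIRQ_BASE + index; }

  int main(void)
  {
      printf("hardware irq 9   -> Linux irq %d\n", pirq_to_irq(9));
      printf("event channel 3  -> Linux irq %d\n", dynirq_to_irq(3));
      return 0;
  }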

Adding a level of indirection for non-domain-0 guests wouldn't be too hard.
We have control over which IRQ a PCI device driver thinks it is binding to,
because we control the PCI config space via pciback. Older devices need more
care, but we could arrange that by maintaining a 1:1 mapping for the legacy
PIC irq range (0 to 15).
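
Something along these lines, say (hypothetical names, just to show the shape
of the allocation policy):

  /* Hypothetical allocation policy for non-domain-0 guests: legacy PIC
   * irqs stay identity-mapped, everything else gets the first free slot. */
  #include <stdio.h>

  #define NR_LEGACY_IRQS  16
  #define NR_GUEST_IRQS   256

  static int irq_in_use[NR_GUEST_IRQS];

  static int assign_guest_irq(int host_pirq)
  {
      int irq;

      if (host_pirq < NR_LEGACY_IRQS) {
          irq_in_use[host_pirq] = 1;   /* legacy devices keep historic numbers */
          return host_pirq;
      }

      for (irq = NR_LEGACY_IRQS; irq < NR_GUEST_IRQS; irq++) {
          if (!irq_in_use[irq]) {
              irq_in_use[irq] = 1;     /* dynamic: first free slot */
              return irq;
          }
      }
      return -1;                       /* exhausted */
  }

  int main(void)
  {
      printf("host pirq 4  -> guest irq %d\n", assign_guest_irq(4));
      printf("host pirq 48 -> guest irq %d\n", assign_guest_irq(48));
      printf("host pirq 77 -> guest irq %d\n", assign_guest_irq(77));
      return 0;
  }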

Doing the same for domain 0 is probably more 'exciting'. If we could force
the 'vector space' IRQ allocation strategy of PCI_MSI then I think this is
more plausible -- since that effectively gives a level of indirection from
the GSI space. The vectors that Xen currently hands out to domain0 are real
vector numbers but, of course, it would be trivial to add a level of
indirection there, or in the caller.
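
Roughly, I mean indirection of this shape (again purely hypothetical names,
not the existing interface):

  /* Hypothetical indirection table for domain 0: instead of handing dom0
   * the real vector number, hand it an index that maps back to the GSI. */
  #include <stdio.h>

  #define MAX_GSIS  256

  static int gsi_to_dom0_irq[MAX_GSIS];   /* 0 = not yet assigned */
  static int next_dom0_irq = 1;

  static int dom0_irq_for_gsi(int gsi)
  {
      if (gsi < 0 || gsi >= MAX_GSIS)
          return -1;
      if (!gsi_to_dom0_irq[gsi])
          gsi_to_dom0_irq[gsi] = next_dom0_irq++;   /* allocate lazily */
      return gsi_to_dom0_irq[gsi];
  }

  int main(void)
  {
      /* dom0 sees small, dense numbers regardless of the underlying GSI */
      printf("GSI 9   -> dom0 irq %d\n", dom0_irq_for_gsi(9));
      printf("GSI 130 -> dom0 irq %d\n", dom0_irq_for_gsi(130));
      printf("GSI 9   -> dom0 irq %d\n", dom0_irq_for_gsi(9));
      return 0;
  }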

This all raises an obvious question: what do you think the scope of work for
upstream merging right now should be? Previously it was non-driver domUs
only (and a simplified form at that). Are you thinking about upstreaming
everything, or is this just review and preparation for that effort in the
future?

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

