
RE: [Xen-devel] Multiple IRQ's in HVM for Windows

  • To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Sat, 27 Dec 2008 21:28:25 +1100
  • Delivery-date: Sat, 27 Dec 2008 02:28:53 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-topic: [Xen-devel] Multiple IRQ's in HVM for Windows

> On 27/12/2008 10:03, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:
> > The driver for the xen platform PCI device is a 'bus driver' under
> > Windows, and enumerates child devices. When it enumerates a child
> > device, I can say 'and allocate me an interrupt line'.
>
> So these child devices don't have to have a physical manifestation in
> PCI space? And you can really request an arbitrary IRQ and then you are
> expected to plumb it through? That sounds weird, but actually quite
> helpful for us.
>
> Probably we'd implement it with an hvm_op to associate an event channel
> with an IO-APIC pin or a local APIC vector. If implemented as wire-OR
> into a set of IO-APIC pins, we'd need logic to deassert wires when
> event channels become not-pending, before the wire gets resampled by
> the PIC/IO-APIC. It's all easier if we can directly deliver to a LAPIC
> vector, because those are inherently edge-triggered / message-based
> (which I think is what we want; although it's more complicated if we
> need to be able to share a LAPIC vector among several event channels
> without 'losing' edges).
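
To make that concrete, here is a sketch of what such an hvm_op might
look like. The op number, structure and field names below are all made
up for illustration; nothing like this exists in the tree today:

    /* Sketch only: a hypothetical hvm_op binding an event channel to a
     * LAPIC vector, so a pending event is injected as an edge/message-
     * style interrupt and nothing ever needs to be deasserted. */
    #include <stdint.h>

    #define HVMOP_bind_evtchn_to_vector 17   /* made-up op number */

    struct xen_hvm_bind_evtchn_to_vector {
        uint32_t evtchn;   /* event channel port to deliver */
        uint32_t vcpu;     /* virtual CPU whose LAPIC gets the vector */
        uint8_t  vector;   /* LAPIC vector (0x10..0xff) to inject */
    };

    /*
     * Guest side would be roughly:
     *
     *     struct xen_hvm_bind_evtchn_to_vector arg = {
     *         .evtchn = port, .vcpu = 0, .vector = 0x60,
     *     };
     *     HYPERVISOR_hvm_op(HVMOP_bind_evtchn_to_vector, &arg);
     *
     * When the event becomes pending, Xen would set the vector's bit in
     * the vLAPIC IRR. Sharing one vector among several event channels
     * then risks coalescing edges, unless the handler rescans all bound
     * channels on every interrupt.
     */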

Well... the 'old' way would probably still have to work (or would it?),
so we could just keep allocating IRQs until we run out, and any leftover
devices would just have to use the old way. Something like the sketch
below.
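
As a rough sketch (the helper below is hypothetical, not real code from
our driver):

    /* Fallback policy: give each child its own line while Windows will
     * still hand one out, and let the leftovers share the platform
     * device's single IRQ as they do today. */
    #include <stdbool.h>

    enum irq_mode { IRQ_DEDICATED, IRQ_SHARED };

    struct child_dev {
        enum irq_mode mode;
        /* ... */
    };

    /* hypothetical helper: request a dedicated line for this child */
    extern bool allocate_dedicated_line(struct child_dev *dev);

    static void assign_interrupts(struct child_dev *children, int n)
    {
        for (int i = 0; i < n; i++) {
            if (allocate_dedicated_line(&children[i]))
                children[i].mode = IRQ_DEDICATED;  /* new per-device IRQ */
            else
                children[i].mode = IRQ_SHARED;     /* the 'old' way */
        }
    }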

I've mentioned the possibility of using MSI before... would that work?
I'm not yet sure if it is supported across all Windows versions, but we
would get lots more 'interrupt channels'...
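
(For what it's worth: Windows only exposes message-signalled interrupts
to drivers from Vista / Server 2008 onward, via IoConnectInterruptEx()
with CONNECT_MESSAGE_BASED, and the device must also opt in with the
MSISupported registry value set from its INF; XP and 2003 guests would
still need the line-based path. A minimal sketch of the Vista-side
connect, with made-up function names around the real API:)

    /* Vista-onward sketch: connect one ISR for all MSI messages; the
     * MessageID argument distinguishes them, so each 'interrupt
     * channel' can carry a different event source. */
    #include <wdm.h>

    static KMESSAGE_SERVICE_ROUTINE MsiIsr;

    static BOOLEAN MsiIsr(PVOID ServiceContext, ULONG MessageID)
    {
        /* a real ISR would dispatch on MessageID to the right child */
        UNREFERENCED_PARAMETER(ServiceContext);
        UNREFERENCED_PARAMETER(MessageID);
        return TRUE;
    }

    NTSTATUS ConnectMsi(PDEVICE_OBJECT Pdo, PVOID Context,
                        PIO_INTERRUPT_MESSAGE_INFO *MsgInfo)
    {
        IO_CONNECT_INTERRUPT_PARAMETERS p;

        RtlZeroMemory(&p, sizeof(p));
        p.Version = CONNECT_MESSAGE_BASED;
        p.MessageBased.PhysicalDeviceObject = Pdo;
        p.MessageBased.ConnectionContext.InterruptMessageTable = MsgInfo;
        p.MessageBased.MessageServiceRoutine = MsiIsr;
        p.MessageBased.ServiceContext = Context;
        /* FallBackServiceRoutine would let the same call degrade to a
         * single line-based interrupt when MSI isn't available. */
        return IoConnectInterruptEx(&p);
    }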

