
RE: [Xen-devel] Multiple IRQ's in HVM for Windows



> I'm not yet sure if they are supported across all windows versions

I believe MSI is supported on Vista and later for workstation versions, and
on Server 2003 and later for server versions.
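
For reference, on those versions the driver hooks a message-signaled
interrupt through IoConnectInterruptEx with CONNECT_MESSAGE_BASED rather
than the old line-based connect. A rough sketch of the call (the function
and variable names below are made up for illustration, and error handling
is omitted):

#include <wdm.h>

static KMESSAGE_SERVICE_ROUTINE XenMsiIsr;

/* Called once per MSI message; MessageId says which vector fired, so
 * each 'interrupt channel' can be dispatched separately. */
static BOOLEAN
XenMsiIsr(PKINTERRUPT Interrupt, PVOID ServiceContext, ULONG MessageId)
{
    UNREFERENCED_PARAMETER(Interrupt);
    UNREFERENCED_PARAMETER(ServiceContext);
    UNREFERENCED_PARAMETER(MessageId);
    return TRUE;
}

NTSTATUS
XenConnectMsi(PDEVICE_OBJECT Pdo, PVOID DevExt,
              PIO_INTERRUPT_MESSAGE_INFO *MsgInfo)
{
    IO_CONNECT_INTERRUPT_PARAMETERS params;

    RtlZeroMemory(&params, sizeof(params));
    params.Version = CONNECT_MESSAGE_BASED;
    params.MessageBased.PhysicalDeviceObject = Pdo;
    params.MessageBased.ConnectionContext.InterruptMessageTable = MsgInfo;
    params.MessageBased.MessageServiceRoutine = XenMsiIsr;
    params.MessageBased.ServiceContext = DevExt;
    params.MessageBased.SynchronizeIrql = 0;     /* let the system pick */
    params.MessageBased.FloatingSave = FALSE;
    /* No FallBackServiceRoutine set, so this fails rather than silently
     * falling back to a line-based interrupt on older platforms. */

    return IoConnectInterruptEx(&params);
}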

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of James Harper
Sent: Saturday, December 27, 2008 5:28 AM
To: Keir Fraser; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] Multiple IRQ's in HVM for Windows

> On 27/12/2008 10:03, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:
> 
> > The driver for the xen platform PCI device is a 'bus driver' under
> > windows, and enumerates child devices. When it enumerates a child
> > device, I can say 'and allocate me an interrupt line'.
> 
> So these child devices don't have to have a physical manifestation in
> PCI space? And you can really request an arbitrary IRQ and then you are
> expected to plumb it through? That sounds weird, but actually quite
> helpful for us.
> 
> Probably we'd implement it with an hvm_op to associate an event channel
> with an IO-APIC pin or a local APIC vector. If implemented as wire-OR
> into a set of IO-APIC pins, we'd need logic to deassert wires when event
> channels become not-pending, before the wire gets resampled by the
> PIC/IO-APIC. It's all easier if we can directly deliver to a LAPIC
> vector because those are inherently edge-triggered / message-based
> (which I think is what we really want; although it's more complicated
> if we need to be able to share a LAPIC vector among several event
> channels without 'losing' edges).
> 
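
To illustrate the 'allocate me an interrupt line' part quoted above: the
bus driver answers IRP_MN_QUERY_RESOURCE_REQUIREMENTS for each child PDO
with an interrupt descriptor, and the PnP manager/arbiter hands back an
actual vector at start-device time. Roughly like this (the helper name,
pool tag and constants are illustrative only, not my actual code):

#include <wdm.h>

/* Build a one-alternative requirements list asking for any free
 * edge-triggered interrupt vector for a child device. Returned from
 * the child PDO's IRP_MN_QUERY_RESOURCE_REQUIREMENTS handler; the list
 * must come from paged pool because the PnP manager frees it. */
PIO_RESOURCE_REQUIREMENTS_LIST
XenBusBuildChildIrqRequirements(VOID)
{
    PIO_RESOURCE_REQUIREMENTS_LIST reqs;
    PIO_RESOURCE_DESCRIPTOR desc;
    ULONG size = sizeof(IO_RESOURCE_REQUIREMENTS_LIST);

    reqs = ExAllocatePoolWithTag(PagedPool, size, 'qriX');
    if (reqs == NULL)
        return NULL;

    RtlZeroMemory(reqs, size);
    reqs->ListSize = size;
    reqs->InterfaceType = Internal;
    reqs->AlternativeLists = 1;
    reqs->List[0].Version = 1;
    reqs->List[0].Revision = 1;
    reqs->List[0].Count = 1;

    desc = &reqs->List[0].Descriptors[0];
    desc->Option = 0;                               /* required */
    desc->Type = CmResourceTypeInterrupt;
    desc->ShareDisposition = CmResourceShareDeviceExclusive;
    desc->Flags = CM_RESOURCE_INTERRUPT_LATCHED;    /* edge-triggered */
    desc->u.Interrupt.MinimumVector = 0;
    desc->u.Interrupt.MaximumVector = 0xFFFFFFFF;   /* any free vector */

    /* The translated CM_PARTIAL_RESOURCE_DESCRIPTOR (vector, level,
     * affinity) arrives later with IRP_MN_START_DEVICE. */
    return reqs;
}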

Well... the 'old' way would probably still have to work (or would it?),
so we could just keep allocating IRQs until we run out, and any leftover
devices would just have to use the old way.
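
On the hvm_op you describe, I'd imagine the guest-visible interface
looking something like the sketch below. To be clear, neither the op
number nor the struct exists in Xen today; the names are made up purely
to show the shape of it, with HYPERVISOR_hvm_op standing in for the
usual hypercall-page wrapper:

#include <stdint.h>

#define HVMOP_bind_evtchn_to_vector  0x100          /* hypothetical */
#define DOMID_SELF                   0x7FF0U

struct xen_hvm_bind_evtchn_to_vector {
    uint16_t domid;    /* DOMID_SELF: bind one of our own channels */
    uint32_t evtchn;   /* event channel port */
    uint8_t  vector;   /* guest LAPIC vector to raise when it fires */
};

extern long HYPERVISOR_hvm_op(unsigned int op, void *arg);

/* Ask Xen to deliver 'port' as an edge on LAPIC 'vector'; being
 * message-like, there is no wire to deassert when the channel goes
 * not-pending. */
static long bind_evtchn_to_vector(uint32_t port, uint8_t vector)
{
    struct xen_hvm_bind_evtchn_to_vector bind = {
        .domid  = DOMID_SELF,
        .evtchn = port,
        .vector = vector,
    };

    return HYPERVISOR_hvm_op(HVMOP_bind_evtchn_to_vector, &bind);
}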

I've mentioned the possibility of using MSI before... would that work?
I'm not yet sure if they are supported across all Windows versions, but
we'd get lots more 'interrupt channels'...

James


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



 

