[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Multiple IRQ's in HVM for Windows

  • To: James Harper <james.harper@xxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Fri, 26 Dec 2008 11:00:40 +0000
  • Cc:
  • Delivery-date: Fri, 26 Dec 2008 03:01:16 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AclnCjPkeAYvm5WbTaqM+PmGGNFiHQAIdYQ3AAU6BeAAATbLsQAADYFAAADLT9k=
  • Thread-topic: [Xen-devel] Multiple IRQ's in HVM for Windows

On 26/12/2008 10:54, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:

> How many interrupts do we have to choose from here? I was able to get
> Windows to use up to (I think) IRQ31.

If going via the virtual PIC, then there are as many interrupts as there are
non-legacy IO-APIC pins. I think currently we have 32 of them.

> As I understand it, most of the 'protocol' we are talking about is based
> around the shared_info_t structure, which appears to make the assumption
> that there is a single point of entry for event delivery into a domain.
> To make use of multiple IRQs we'd have to check every single event bit,
> right? Is that a performance problem?

No, currently we call into HVM interrupt emulation code to assert an IO-APIC
pin when CPU0's event_pending flag is asserted. We could also do that when
other CPUs' event_pending flags become asserted, or when individual event
channels become asserted. We just have to hook into event-channel code in a
different place. Or indeed we can just send 'messages' to HVM virtual local
APICs to trigger interrupts on the virtual CPUs directly, without any
integration with the virtual PCI or PIC subsystems.

 -- Keir

Xen-devel mailing list


