
RE: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's


  • To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Mon, 18 Aug 2008 22:32:02 +1000
  • Cc:
  • Delivery-date: Mon, 18 Aug 2008 05:32:24 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AckBIKaFdOX6QU6hSgaQozLRUZxMrQACj3ysAABk+0AAAE37LgAACjsw
  • Thread-topic: [Xen-devel] HVM windows - PCI IRQ firing on both CPU's

> 
> On 18/8/08 13:19, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:
> 
> > Just so I understand, even if I see the IRQ on CPU1, I should always
> > treat it as if it came in on CPU0?
> 
> Yes. Only vcpu0's event-channel logic is wired into the virtual
> PIC/IOAPIC. Even if the IOAPIC then forwards the interrupt to a
> different VCPU, it's still vcpu0's event-channel status that initiated
> the interrupt. Other vcpus' event-channel statuses do not cause
> interrupts in HVM.
> 

I'm not sure whether this is a general or a Windows-specific question,
but I can approach this in one of two ways:

1. Make sure the interrupt is only ever delivered to CPU0 by specifying
an affinity mask of CPU0 when I call IoConnectInterrupt.
2. Accept the interrupt on any CPU but always use vcpu_info[0] to check
the upcall/pending flags etc. (rough sketch below).
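For concreteness, here is a rough, untested sketch of both approaches.
The XENPCI_DEVICE_DATA context type, the shared_info_area field and the
EvtChnHandler dispatch routine are just illustrative names I've made up
for this mail; the vcpu_info layout is the one from Xen's public xen.h:

  /* Option 1: bind the line to CPU0 via IoConnectInterrupt's
   * ProcessorEnableMask argument. */
  status = IoConnectInterrupt(&xpdd->interrupt_object,
    XenEvtIsr, xpdd, NULL, vector, irql,
    irql /* SynchronizeIrql */, LevelSensitive,
    TRUE /* ShareVector */, (KAFFINITY)1 /* CPU0 only */,
    FALSE /* FloatingSave */);

  /* Option 2: let the ISR run on whichever CPU the IOAPIC picked,
   * but always consult vcpu_info[0]. */
  BOOLEAN
  XenEvtIsr(PKINTERRUPT interrupt, PVOID context)
  {
    PXENPCI_DEVICE_DATA xpdd = context;
    vcpu_info_t *vcpu0 = &xpdd->shared_info_area->vcpu_info[0];

    UNREFERENCED_PARAMETER(interrupt);

    if (!vcpu0->evtchn_upcall_pending)
      return FALSE; /* not ours - line is shared and level-triggered */

    vcpu0->evtchn_upcall_pending = 0;
    EvtChnHandler(xpdd); /* walk evtchn_pending_sel/evtchn_pending */
    return TRUE;
  }

(In real code I'd clear evtchn_upcall_pending with an atomic
test-and-clear rather than a plain store, but the above shows the idea.)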

Does the hypervisor make any scheduling assumptions when delivering an
event to a domain? (e.g. does it schedule vcpu0 on the basis that that
vcpu is going to be handling the event?)

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel