
RE: [Xen-devel] interrupt affinity question



It should be the same as Linux.  Dom0 Linux basically tells Xen what value to program into the IOAPIC RTE.


From: Agarwal, Lomesh
Sent: Wednesday, October 24, 2007 8:50 PM
To: Kay, Allen M; 'xen-devel@xxxxxxxxxxxxxxxxxxx'
Cc: Han, Weidong
Subject: RE: [Xen-devel] interrupt affinity question

So there is no default interrupt affinity for any physical IRQ in Xen?  Is the IOAPIC programmed to deliver interrupts in round-robin fashion, or do all interrupts go to one processor only?

 


From: Kay, Allen M
Sent: Wednesday, October 24, 2007 5:22 PM
To: Agarwal, Lomesh; xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: Han, Weidong
Subject: RE: [Xen-devel] interrupt affinity question

 

The dma_msi_* stuff in intel-iommu.c is not related to this.  It looks like an area that needs to be cleaned up a bit.

 

The call to request_irq() is for setting up the VT-d fault handler - linking a vector with iommu_page_fault().  It is only used when there is an IOMMU page fault, which should not happen if everything is set up correctly.

 

Passthrough device interrupt handling goes via the do_IRQ->do_IRQ_guest->hvm_do_IRQ_dpci path.  The IOAPIC programming for the passthrough device was originally set up by the dom0 PCI driver.  The interrupt of the passthrough device always gets handled by Xen first and then gets re-injected into the guest via the virtual ioapic/lapic models.

 

There is an interrupt latency between the point where the physical interrupt occurs and the point where the virtual interrupt is injected into the guest - especially if the guest's vcpu is not running.  We are still investigating how to lower this latency.

 

Allen 

 


From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Agarwal, Lomesh
Sent: Wednesday, October 24, 2007 3:55 PM
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] interrupt affinity question

From looking at the code, it appears that interrupt affinity will be set for all physical IRQs, and that it will be set to the physical processor on which the VCPU that called request_irq was running.

Can somebody confirm my understanding?

pirq_guest_bind (in arch/x86/irq.c) calls set_affinity (which translates to the dma_msi_set_affinity function in arch/x86/hvm/vmx/vtd/intel-iommu.c for VT-d).

So that means that if request_irq for the NIC interrupt is called while a domain with a single VCPU is scheduled on physical CPU 1, then the NIC interrupt will be bound to physical CPU 1, and if the same domain is later scheduled onto physical CPU 0 it won't get the interrupt until it does a VMEXIT.

So for lower interrupt latency we should also pin the domain's VCPUs.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

