
Re: [Xen-devel] irq_guest_eoi_timer interaction with MSI


  • To: Jan Beulich <jbeulich@xxxxxxxxxx>
  • From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Date: Thu, 13 Nov 2008 16:50:40 +0000
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 13 Nov 2008 08:51:05 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AclFr/WrNDE9QrGjEd2N7wAX8io7RQ==
  • Thread-topic: [Xen-devel] irq_guest_eoi_timer interaction with MSI

On 13/11/08 16:43, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> Up to now, MSI didn't require an EOI, and devices that support masking (in
> particular all MSI-X ones) wouldn't generally require an EOI even with the
> patch sent earlier. What you propose would make them all require an EOI
> all of a sudden, even though they need hypervisor assistance only when
> the interrupt gets masked.
> 
>> Also I'll add we currently do a hypercall for every level-triggered IO-APIC
>> IRQ, which was really all we supported until recently. Seemed to work well
>> enough performance-wise in that case.

So we'd add a pirq-indexed bitmap to mitigate that. Whether we use
PHYSDEVOP_irq_eoi or EVTCHNOP_unmask, we need a new shared-memory bitmap,
right? Might as well use irq_eoi and index by pirq, I'd say.
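
Something along these lines on the guest side, perhaps (rough sketch only;
pirq_eoi_map and its registration are made up for illustration, and only
PHYSDEVOP_eoi / HYPERVISOR_physdev_op are existing interfaces):

  #include <linux/bitops.h>
  #include <xen/interface/physdev.h>

  /* Hypothetical: one bit per pirq, in a page shared with Xen.  A set
   * bit means "this pirq still needs an EOI hypercall" -- i.e.
   * level-triggered IO-APIC lines, or an MSI that Xen had to mask. */
  static unsigned long *pirq_eoi_map;

  static void pirq_unmask_and_eoi(unsigned int pirq)
  {
      struct physdev_eoi eoi = { .irq = pirq };

      /* Common maskable-MSI/MSI-X case: no hypercall at all. */
      if ( !test_bit(pirq, pirq_eoi_map) )
          return;

      HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
  }

That would keep the hypercall off the hot path for maskable MSI while
leaving the level-triggered IO-APIC behaviour as it is today.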

> While that may be correct (I doubt anyone measured the throughput
> difference - really, there was nothing to measure in the IO-APIC case, as
> there was no alternative to doing an EOI hypercall), I don't view this as a
> valid argument. If we can do with fewer hypercalls, we should - especially
> when using a feature (MSI) whose particular goal is to improve performance.
> Otherwise, the only reason for having MSI support would be devices that
> don't support INTA (of which there presumably aren't that many).

More likely it's to reduce pin counts and hence production costs. :-) Still,
I'd agree that fewer hypercalls are better in general.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

