
Re: [Xen-devel] [PATCH v2] x86/hvm: re-work viridian APIC assist code



On Thu, 2020-08-13 at 11:45 +0200, Roger Pau Monné wrote:
> > The loop appears to be there to handle the case where multiple
> > devices assigned to a domain have MSIs programmed with the same
> > dest/vector... which seems like an odd thing for a guest to do but I
> > guess it is at liberty to do it. Does it matter whether they are
> > maskable or not?
> 
> Such a configuration would never work properly, as lapic vectors are
> edge-triggered and thus can't be safely shared between devices?
> 
> I think the iteration is there in order to get the hvm_pirq_dpci
> struct that injected that specific vector, so that you can perform the
> ack if required. Having lapic EOI callbacks should simplify this, as you
> can pass a hvm_pirq_dpci when injecting a vector, and that would be
> forwarded to the EOI callback, so there should be no need to iterate
> over the list of hvm_pirq_dpci for a domain.

If we didn't have the loop — or more to the point if we didn't grab the
domain-global d->event_lock that protects it — then I wouldn't even
care about optimising the whole thing away for the modern MSI case.

It isn't the act of not doing any work in the _hvm_dpci_msi_eoi()
function that takes the time. It's that domain-global lock, and, to a
lesser extent, the retpoline-stalled indirect call from pt_pirq_iterate().
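
For reference, the path in question looks something like the below today.
This is a paraphrase from memory rather than a verbatim copy of
xen/drivers/passthrough/io.c, so treat the flag, field and helper names
(HVM_IRQ_DPCI_MACH_MSI, gmsi.gvec, __msi_pirq_eoi) as approximate:

    /* Paraphrase of the current vector-EOI path (names approximate). */
    void hvm_dpci_msi_eoi(struct domain *d, int vector)
    {
        spin_lock(&d->event_lock);      /* the domain-global lock */
        /* Indirect call per dpci entry => retpoline stall each time round. */
        pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
        spin_unlock(&d->event_lock);
    }

    static int _hvm_dpci_msi_eoi(struct domain *d,
                                 struct hvm_pirq_dpci *pirq_dpci, void *arg)
    {
        int vector = (long)arg;

        /* Only entries backed by a machine MSI with this guest vector match. */
        if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
             pirq_dpci->gmsi.gvec == vector )
            __msi_pirq_eoi(pirq_dpci);  /* ack/unmask the machine IRQ */

        return 0;
    }

So for the common case of a vector that was never injected through a
passed-through MSI, we take the lock and walk the whole list just to
match nothing at all.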

I suppose with Roger's series, we'll still suffer the retpoline stall
for a callback that ultimately does nothing, but it's nowhere near as
expensive as the lock.
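
For the avoidance of doubt, the shape I'm expecting is something like
the below. The names here (vector_eoi_cb_t, hvm_lapic_set_callback,
inject_guest_msi) are my own placeholders for illustration, not
necessarily what the series actually uses:

    /* Placeholder callback type, invoked from the vlapic EOI path. */
    typedef void (*vector_eoi_cb_t)(struct vcpu *v, unsigned int vector,
                                    void *data);

    /*
     * EOI callback: gets back exactly the pirq_dpci that was stashed at
     * injection time, so no d->event_lock and no pt_pirq_iterate() walk.
     * Still an indirect call, hence the residual retpoline stall.
     */
    static void msi_vector_eoi(struct vcpu *v, unsigned int vector, void *data)
    {
        struct hvm_pirq_dpci *pirq_dpci = data;

        if ( pirq_dpci )
            __msi_pirq_eoi(pirq_dpci);  /* ack/unmask this machine IRQ only */
    }

    /* At injection time, remember which pirq_dpci owns this vector. */
    static void inject_guest_msi(struct vcpu *v, unsigned int vector,
                                 struct hvm_pirq_dpci *pirq_dpci)
    {
        hvm_lapic_set_callback(v, vector, msi_vector_eoi, pirq_dpci);
        vlapic_set_irq(vcpu_vlapic(v), vector, 0);
    }

That gets rid of both the list walk and the lock, which is the part
I actually care about.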
