Re: [PATCH v4 05/12] x86/hvm: allowing registering EOI callbacks for GSIs
On Mon, May 03, 2021 at 05:50:39PM +0200, Jan Beulich wrote:
> On 20.04.2021 16:07, Roger Pau Monne wrote:
> > Such callbacks will be executed once an EOI is performed by the guest,
> > regardless of whether the interrupts are injected from the vIO-APIC or
> > the vPIC, as ISA IRQs are translated to GSIs and then the
> > corresponding callback is executed at EOI.
> >
> > The vIO-APIC infrastructure for handling EOIs is built on top of the
> > existing vlapic EOI callback functionality, while the vPIC one is
> > handled when writing to the vPIC EOI register.
> >
> > Note that such callbacks need to be registered and de-registered, and
> > that a single GSI can have multiple callbacks associated. That's
> > because GSIs can be level triggered and shared, as is the case
> > with legacy PCI interrupts shared between several devices.
> >
> > Strictly speaking this is a non-functional change, since there are no
> > users of this new interface introduced by this change.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>
> In principle, as everything looks functionally correct to me,
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> Nevertheless, besides a few remarks further down, I have to admit I'm
> concerned of the direct-to-indirect calls conversion (not just here,
> but also covering earlier patches), which (considering we're talking
> of EOI) I expect may occur quite frequently for at least some guests.

I would expect the vmexit cost of each EOI to dwarf any difference
between using direct and indirect calls.

> There aren't that many different callback functions which get
> registered, are there? Hence I wonder whether enumerating them and
> picking the right one via, say, an enum wouldn't be more efficient
> and still allow elimination of (in the case here) unconditional calls
> to hvm_dpci_eoi() for every EOI.

So for the vlapic (vector) callbacks the current consumers are:

 - MSI passthrough.
 - vPT.
 - IO-APIC.

For GSI callbacks we have:

 - GSI passthrough.
 - vPT.

I could see about implementing this (a rough sketch of what I have in
mind follows the irq.c hunk below). It's also kind of blocked on the
RTC stuff, since vPT cannot be migrated to this new model unless we
remove strict_mode or change the logic here to allow GSI callbacks to
de-register themselves.

> > --- a/xen/arch/x86/hvm/irq.c
> > +++ b/xen/arch/x86/hvm/irq.c
> > @@ -595,6 +595,81 @@ int hvm_local_events_need_delivery(struct vcpu *v)
> >      return !hvm_interrupt_blocked(v, intack);
> >  }
> >
> > +int hvm_gsi_register_callback(struct domain *d, unsigned int gsi,
> > +                              struct hvm_gsi_eoi_callback *cb)
> > +{
> > +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> > +
> > +    if ( gsi >= hvm_irq->nr_gsis )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return -EINVAL;
> > +    }
> > +
> > +    write_lock(&hvm_irq->gsi_callbacks_lock);
> > +    list_add(&cb->list, &hvm_irq->gsi_callbacks[gsi]);
> > +    write_unlock(&hvm_irq->gsi_callbacks_lock);
> > +
> > +    return 0;
> > +}
> > +
> > +int hvm_gsi_unregister_callback(struct domain *d, unsigned int gsi,
> > +                                struct hvm_gsi_eoi_callback *cb)
> > +{
> > +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> > +    const struct list_head *tmp;
> > +    bool found = false;
> > +
> > +    if ( gsi >= hvm_irq->nr_gsis )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return -EINVAL;
> > +    }
> > +
> > +    write_lock(&hvm_irq->gsi_callbacks_lock);
> > +    list_for_each ( tmp, &hvm_irq->gsi_callbacks[gsi] )
> > +        if ( tmp == &cb->list )
> > +        {
> > +            list_del(&cb->list);
>
> Minor remark: Would passing "tmp" here lead to better generated
> code?

Maybe? I don't mind doing so.
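Coming back to the enum suggestion: below is a rough sketch of what
that dispatch could look like. None of this is from the patch -- the
consumer list mirrors the enumeration above, the struct layout and the
vPT case are assumptions, and it presumes gsi_callbacks_lock is an
rwlock (which the write_lock() calls above suggest):

enum hvm_gsi_eoi_cb_type {
    HVM_GSI_EOI_CB_DPCI,            /* GSI passthrough */
    HVM_GSI_EOI_CB_VPT,             /* virtual periodic timers */
};

struct hvm_gsi_eoi_callback {
    struct list_head list;
    enum hvm_gsi_eoi_cb_type type;  /* replaces the function pointer */
    void *data;                     /* consumer-specific context */
};

static void hvm_gsi_execute_callbacks(struct domain *d, unsigned int gsi)
{
    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
    struct hvm_gsi_eoi_callback *cb;

    read_lock(&hvm_irq->gsi_callbacks_lock);
    list_for_each_entry ( cb, &hvm_irq->gsi_callbacks[gsi], list )
        switch ( cb->type )
        {
        case HVM_GSI_EOI_CB_DPCI:
            /* Direct call: the target is known at the call site. */
            hvm_dpci_eoi(d, gsi);
            break;

        case HVM_GSI_EOI_CB_VPT:
            /* vPT EOI handling would be dispatched here. */
            break;
        }
    read_unlock(&hvm_irq->gsi_callbacks_lock);
}

Note a callback still couldn't de-register itself from inside the
switch while the read lock is held, which is the same strict_mode/RTC
wrinkle mentioned above.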
> > @@ -419,13 +421,25 @@ static void eoi_callback(struct vcpu *v, unsigned int vector, void *data)
> >          if ( is_iommu_enabled(d) )
> >          {
> >              spin_unlock(&d->arch.hvm.irq_lock);
> > -            hvm_dpci_eoi(d, vioapic->base_gsi + pin);
> > +            hvm_dpci_eoi(d, gsi);
> >              spin_lock(&d->arch.hvm.irq_lock);
> >          }
> >
> > +        /*
> > +         * Callbacks don't expect to be executed with any lock held, so
> > +         * drop the lock that protects the vIO-APIC fields from changing.
> > +         *
> > +         * Note that the redirection entry itself cannot go away, so upon
> > +         * retaking the lock we only need to avoid making assumptions on
> > +         * redirection entry field values (ie: recheck the IRR field).
> > +         */
> > +        spin_unlock(&d->arch.hvm.irq_lock);
> > +        hvm_gsi_execute_callbacks(d, gsi);
> > +        spin_lock(&d->arch.hvm.irq_lock);
>
> While this may be transient in the series, as said before I'm not
> happy about this double unlock/relock sequence. I didn't really
> understand what would be wrong with
>
>     spin_unlock(&d->arch.hvm.irq_lock);
>     if ( is_iommu_enabled(d) )
>         hvm_dpci_eoi(d, gsi);
>     hvm_gsi_execute_callbacks(d, gsi);
>     spin_lock(&d->arch.hvm.irq_lock);
>
> This in particular wouldn't grow but even shrink the later patch
> dropping the call to hvm_dpci_eoi().

Sure.

> > --- a/xen/arch/x86/hvm/vpic.c
> > +++ b/xen/arch/x86/hvm/vpic.c
> > @@ -235,6 +235,8 @@ static void vpic_ioport_write(
> >              unsigned int pin = __scanbit(pending, 8);
> >
> >              ASSERT(pin < 8);
> > +            hvm_gsi_execute_callbacks(current->domain,
> > +                    hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >              hvm_dpci_eoi(current->domain,
> >                           hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >              __clear_bit(pin, &pending);
> > @@ -285,6 +287,8 @@ static void vpic_ioport_write(
> >          /* Release lock and EOI the physical interrupt (if any). */
> >          vpic_update_int_output(vpic);
> >          vpic_unlock(vpic);
> > +        hvm_gsi_execute_callbacks(current->domain,
> > +                hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >          hvm_dpci_eoi(current->domain,
> >                      hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >          return; /* bail immediately */
>
> Another presumably minor remark: In the IO-APIC case you insert after
> the call to hvm_dpci_eoi(). I wonder if consistency wouldn't help
> avoid questions of archeologists in a couple of years' time.

Hm, sorry, I remember trying to place them in the same order, but
likely messed up the order during some rebase.
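FWIW, this is roughly how a consumer of the new interface would look
(a sketch, not from the patch: only the 'list' member of struct
hvm_gsi_eoi_callback shows up in the hunks above, so the 'callback'
and 'data' members and the handler signature are my assumption):

static void my_eoi_handler(struct domain *d, unsigned int gsi, void *data)
{
    /*
     * Runs when the guest EOIs 'gsi'; the vIO-APIC/vPIC locks are
     * dropped around the invocation (see the comment in the
     * eoi_callback() hunk above).
     */
}

static struct hvm_gsi_eoi_callback my_cb = {
    .callback = my_eoi_handler,
};

static int my_init(struct domain *d, unsigned int gsi)
{
    /* Fails with -EINVAL if 'gsi' is out of range for the domain. */
    return hvm_gsi_register_callback(d, gsi, &my_cb);
}

static void my_fini(struct domain *d, unsigned int gsi)
{
    /* Must happen before 'my_cb' can be freed or reused. */
    hvm_gsi_unregister_callback(d, gsi, &my_cb);
}

Thanks, Roger.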