
Re: [Xen-devel] [PATCH v2] x86/hvm: re-work viridian APIC assist code


  • To: David Woodhouse <dwmw2@xxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 14 Aug 2020 16:41:26 +0200
  • Cc: <paul@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>, 'Eslam Elnikety' <elnikety@xxxxxxxxx>, 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>, "'Shan Haitao'" <haitao.shan@xxxxxxxxx>, 'Jan Beulich' <jbeulich@xxxxxxxx>
  • Delivery-date: Fri, 14 Aug 2020 14:41:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Aug 14, 2020 at 03:13:51PM +0100, David Woodhouse wrote:
> On Thu, 2020-08-13 at 11:45 +0200, Roger Pau Monné wrote:
> > > The loop appears to be there to handle the case where multiple
> > > devices assigned to a domain have MSIs programmed with the same
> > > dest/vector... which seems like an odd thing for a guest to do but I
> > > guess it is at liberty to do it. Does it matter whether they are
> > > maskable or not?
> > 
> > Such a configuration would never work properly, as lapic vectors are
> > edge triggered and thus can't be safely shared between devices?
> > 
> > I think the iteration is there in order to get the hvm_pirq_dpci
> > struct that injected that specific vector, so that you can perform the
> > ack if required. Having lapic EOI callbacks should simplify this, as you
> > can pass a hvm_pirq_dpci when injecting a vector, and that would be
> > forwarded to the EOI callback, so there should be no need to iterate
> > over the list of hvm_pirq_dpci for a domain.
> 
> If we didn't have the loop — or more to the point if we didn't grab the
> domain-global d->event_lock that protects it — then I wouldn't even
> care about optimising the whole thing away for the modern MSI case.
> 
> It isn't the act of not doing any work in the _hvm_dpci_msi_eoi()
> function that takes the time. It's that domain-global lock, and a
> little bit the retpoline-stalled indirect call from pt_pirq_iterate().
> 
> I suppose with Roger's series, we'll still suffer the retpoline stall
> for a callback that ultimately does nothing, but it's nowhere near as
> expensive as the lock.

I think we could ultimately avoid the callback (and the vmexit when
running on Intel with virtual interrupt delivery) by not registering
any callback when injecting a vector that originated from a source
that doesn't require any Ack. The diff below should be applied on top
of my series, and I think it removes the execution of a callback when
there's no Ack to perform. It's still pretty much a proof of concept
and could do with some further cleanup.

---8<---
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index e192c4c6da..483c69deb3 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -110,7 +110,7 @@ int vmsi_deliver(struct domain *d, int vector, uint8_t dest, uint8_t dest_mode,
                                  trig_mode, NULL, NULL);
 }
 
-void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
+void vmsi_deliver_pirq(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
 {
     uint32_t flags = pirq_dpci->gmsi.gflags;
     int vector = pirq_dpci->gmsi.gvec;
@@ -118,6 +118,7 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
     bool dest_mode = flags & XEN_DOMCTL_VMSI_X86_DM_MASK;
     uint8_t delivery_mode = MASK_EXTR(flags, XEN_DOMCTL_VMSI_X86_DELIV_MASK);
     bool trig_mode = flags & XEN_DOMCTL_VMSI_X86_TRIG_MASK;
+    struct pirq *pirq = dpci_pirq(pirq_dpci);
 
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "msi: dest=%x dest_mode=%x delivery_mode=%x "
@@ -127,7 +128,7 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
     ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
 
     vmsi_deliver_callback(d, vector, dest, dest_mode, delivery_mode, trig_mode,
-                          hvm_dpci_msi_eoi, NULL);
+                          pirq->masked ? hvm_dpci_msi_eoi : NULL, pirq_dpci);
 }
 
 /* Return value, -1 : multi-dests, non-negative value: dest_vcpu_id */
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index 3793029b29..2a0b7014f2 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -851,29 +851,6 @@ static void __msi_pirq_eoi(struct hvm_pirq_dpci *pirq_dpci)
     }
 }
 
-static int _hvm_dpci_msi_eoi(struct domain *d,
-                             struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    int vector = (long)arg;
-
-    if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
-         (pirq_dpci->gmsi.gvec == vector) )
-    {
-        unsigned int dest = MASK_EXTR(pirq_dpci->gmsi.gflags,
-                                      XEN_DOMCTL_VMSI_X86_DEST_ID_MASK);
-        bool dest_mode = pirq_dpci->gmsi.gflags & XEN_DOMCTL_VMSI_X86_DM_MASK;
-
-        if ( vlapic_match_dest(vcpu_vlapic(current), NULL, 0, dest,
-                               dest_mode) )
-        {
-            __msi_pirq_eoi(pirq_dpci);
-            return 1;
-        }
-    }
-
-    return 0;
-}
-
 void hvm_dpci_msi_eoi(struct vcpu *v, unsigned int vector, void *data)
 {
     struct domain *d = v->domain;
@@ -883,7 +860,7 @@ void hvm_dpci_msi_eoi(struct vcpu *v, unsigned int vector, void *data)
        return;
 
     spin_lock(&d->event_lock);
-    pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
+    __msi_pirq_eoi(data);
     spin_unlock(&d->event_lock);
 }
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index be0d8b0a4d..c28fbf96f9 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -264,7 +264,7 @@ int vmsi_deliver(
     uint8_t dest, uint8_t dest_mode,
     uint8_t delivery_mode, uint8_t trig_mode);
 struct hvm_pirq_dpci;
-void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *);
+void vmsi_deliver_pirq(struct domain *d, struct hvm_pirq_dpci *);
 int hvm_girq_dest_2_vcpu_id(struct domain *d, uint8_t dest, uint8_t dest_mode);
 
 enum hvm_intblk

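To make the mechanism clearer, with the above applied the EOI side
boils down to something like the sketch below. This is only an
illustration: the per-vector callback storage (the vlapic_vector_callback
struct and the callbacks[] array) is made up for the example and doesn't
necessarily match the naming in my series; the point is just that the
callback and its data are latched at injection time, so EOI of a vector
never needs to iterate over the domain's list of hvm_pirq_dpci.

/*
 * Sketch only: vlapic_vector_callback and the callbacks[] array are
 * hypothetical names, not the literal code from my series.
 */
typedef void vlapic_eoi_callback_t(struct vcpu *v, unsigned int vector,
                                   void *data);

struct vlapic_vector_callback {
    vlapic_eoi_callback_t *callback;
    void *data;
};

static void vlapic_handle_EOI_callback(struct vlapic *vlapic,
                                       unsigned int vector)
{
    /* Hypothetical per-vector slot, filled in at injection time. */
    struct vlapic_vector_callback *cb = &vlapic->callbacks[vector];

    /* Sources that don't require an Ack registered no callback at all. */
    if ( cb->callback )
        cb->callback(vlapic_vcpu(vlapic), vector, cb->data);
}

Since vmsi_deliver_pirq() above only passes hvm_dpci_msi_eoi when the
pirq is masked, MSIs that don't need an Ack get a NULL callback and
the EOI path does nothing for them.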