
[Xen-devel] [PATCH v3] x86/apicv: Enhance posted-interrupt processing



__vmx_deliver_posted_interrupt() wrongly used a softirq bit to decide whether
to suppress an IPI. Its logic was: on the first delivery, send the IPI and set
the softirq bit; on subsequent deliveries, check the softirq bit and skip the
IPI if the bit is already set. However, if the first IPI arrives while the
target pCPU is in non-root mode, the hardware consumes the IPI and syncs PIR
to vIRR. In that process no one (neither hardware nor software) clears the
softirq bit. As a result, all following IPIs are wrongly suppressed.

This patch discards the suppression check and always sends the IPI.
The softirq still needs to be raised, but with one change: for the
'cpu != smp_processor_id()' case, the softirq is now raised from the IPI
interrupt handler rather than at the sending side. That is, don't raise a
softirq on the sender for this case, and install pi_notification_interrupt()
(which raises the softirq) as the interrupt handler regardless of whether
VT-d posted interrupts are enabled. The only behavioural difference is that
when the IPI arrives while the pCPU is in non-root mode, no useless softirq
is raised, since the IPI is consumed by hardware; previously a softirq was
raised unconditionally.

Quan doesn't have enough time to upstream this fix and asked me to do
it. This also merges his related patch
(https://lists.xenproject.org/archives/html/xen-devel/2017-02/msg02885.html).

Signed-off-by: Quan Xu <xuquan8@xxxxxxxxxx>
Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
---
 xen/arch/x86/hvm/vmx/vmx.c | 56 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 49 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 5b1717d..9db4bd0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1842,13 +1842,59 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
     bool_t running = v->is_running;
 
     vcpu_unblock(v);
+    /*
+     * The condition below excludes two cases in which no further action is
+     * needed to make sure the target vCPU (@v) syncs PIR to vIRR soon.
+     * Specifically, the two cases are:
+     * 1. The target vCPU is not running, i.e. it is blocked or runnable.
+     * Since we unblocked it above, it will sync PIR to vIRR when it is
+     * next chosen to run.
+     * 2. The target vCPU is the current vCPU and in_irq() is false, i.e.
+     * this function was called in non-interrupt context. We never call
+     * this function in non-interrupt context after the last point where a
+     * vCPU syncs PIR to vIRR, so excluding this case is also safe.
+     */
     if ( running && (in_irq() || (v != current)) )
     {
+        /*
+         * Note: only two cases reach here:
+         * 1. The target vCPU is running on another pCPU.
+         * 2. The target vCPU is running on the same pCPU as the current
+         * vCPU, and the current vCPU is in interrupt context. That is to
+         * say, the target vCPU is the current vCPU.
+         *
+         * Note2: don't worry about v->processor changing underneath us:
+         * at the latest when the target vCPU is chosen to run or blocks,
+         * it gets a chance to sync PIR to vIRR.
+         */
         unsigned int cpu = v->processor;
 
-        if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
-             && (cpu != smp_processor_id()) )
+        /*
+         * For case 1, send an IPI to the pCPU. When the IPI arrives, the
+         * target vCPU may be running in non-root mode, running in root
+         * mode, runnable or blocked. If the target vCPU is running in
+         * non-root mode, the hardware syncs PIR to vIRR, because the IPI
+         * vector has that special meaning to the pCPU. If the target vCPU
+         * is running in root mode, the interrupt handler runs; to make
+         * sure the target vCPU goes back through vmx_intr_assist(), the
+         * interrupt handler raises a softirq if none is pending.
+         * If the target vCPU is runnable, it will sync PIR to vIRR the
+         * next time it is chosen to run. In this case an IPI and a softirq
+         * are sent to the wrong vCPU, which we consider harmless. If the
+         * target vCPU is blocked, then since vcpu_block() checks whether
+         * there is an event to be delivered via local_events_need_delivery()
+         * just after blocking, the vCPU must already have synced PIR to
+         * vIRR; again, the IPI and softirq merely hit the wrong vCPU.
+         */
+        if ( cpu != smp_processor_id() )
             send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
+        /*
+         * For case 2, raising a softirq ensures vmx_intr_assist() is
+         * called, where PIR gets a chance to be synced to vIRR. As an
+         * optimization, only raise the softirq when none is pending.
+         */
+        else if ( !softirq_pending(cpu) )
+            raise_softirq(VCPU_KICK_SOFTIRQ);
     }
 }
 
@@ -2281,13 +2327,9 @@ const struct hvm_function_table * __init start_vmx(void)
 
     if ( cpu_has_vmx_posted_intr_processing )
     {
+        alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
         if ( iommu_intpost )
-        {
-            alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
-        }
-        else
-            alloc_direct_apic_vector(&posted_intr_vector, event_check_interrupt);
     }
     else
     {
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
