
Re: [Xen-devel] [PATCH for-4.13 v3 2/2] x86/vmx: always sync PIR to IRR before vmentry



> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: Tuesday, November 26, 2019 9:27 PM
> 
> When using posted interrupts on Intel hardware it's possible that the
> vCPU resumes execution with a stale local APIC IRR register because,
> depending on the interrupts to be injected, vlapic_has_pending_irq
> might not be called, and thus the PIR won't be synced into the IRR.
> 
> Fix this by making sure PIR is always synced to IRR in
> hvm_vcpu_has_pending_irq regardless of what interrupts are pending.
> 
> While there, also simplify the code in __vmx_deliver_posted_interrupt:
> only raise a softirq if the vCPU is the one currently running and
> __vmx_deliver_posted_interrupt is called from interrupt context. The

As commented earlier, this is exactly what the original code does. So
what is the simplification?
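For reference, here is a condensed side-by-side of the two control
flows, reconstructed from the diff below (local variables inlined,
comments elided), to make the question concrete:

    /* Original: */
    bool_t running = v->is_running;

    vcpu_unblock(v);
    if ( running && (in_irq() || (v != current)) )
    {
        if ( v->processor != smp_processor_id() )
            send_IPI_mask(cpumask_of(v->processor), posted_intr_vector);
        else if ( !softirq_pending(v->processor) )
            raise_softirq(VCPU_KICK_SOFTIRQ);
    }

    /* Patched: */
    vcpu_unblock(v);
    if ( v->is_running && v != current )
        send_IPI_mask(cpumask_of(v->processor), posted_intr_vector);
    else if ( v == current && in_irq() &&
              !softirq_pending(smp_processor_id()) )
        raise_softirq(VCPU_KICK_SOFTIRQ);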

> softirq is raised to make sure vmx_intr_assist is retried if the
> interrupt happens to arrive after vmx_intr_assist but before
> interrupts are disabled in vmx_do_vmentry. Also simplify the logic for
> IPIing other pCPUs: there's no need to check v->processor, since the
> IPI should be sent as long as the vCPU is not the current one and it's
> running.
> 
> Reported-by: Joe Jin <joe.jin@xxxxxxxxxx>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Cc: Juergen Gross <jgross@xxxxxxxx>
> ---
> Changes since v2:
>  - Raise a softirq if in interrupt context and the vCPU is the current
>    one.
>  - Use is_running instead of runnable.
>  - Remove the call to vmx_sync_pir_to_irr in vmx_intr_assist and
>    instead always call vlapic_has_pending_irq in
>    hvm_vcpu_has_pending_irq.
> ---
>  xen/arch/x86/hvm/irq.c     |  7 +++--
>  xen/arch/x86/hvm/vmx/vmx.c | 64 +++++++++++---------------------------
>  2 files changed, 24 insertions(+), 47 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
> index e03a87ad50..b50ac62a16 100644
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -515,7 +515,11 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>  struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
>  {
>      struct hvm_domain *plat = &v->domain->arch.hvm;
> -    int vector;
> +    /*
> +     * Always call vlapic_has_pending_irq so that PIR is synced into IRR when
> +     * using posted interrupts.
> +     */
> +    int vector = vlapic_has_pending_irq(v);
> 
>      if ( unlikely(v->nmi_pending) )
>          return hvm_intack_nmi;
> @@ -530,7 +534,6 @@ struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
>      if ( vlapic_accept_pic_intr(v) && plat->vpic[0].int_output )
>          return hvm_intack_pic(0);
> 
> -    vector = vlapic_has_pending_irq(v);
>      if ( vector != -1 )
>          return hvm_intack_lapic(vector);
> 
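For context, the function after this hunk reads roughly as follows
(reconstructed from the diff above; the interrupt sources between the
NMI and PIC checks are elided):

    struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
    {
        struct hvm_domain *plat = &v->domain->arch.hvm;
        /*
         * Called unconditionally so that, with posted interrupts, PIR is
         * synced into IRR even when a higher-priority source ends up
         * being returned below.
         */
        int vector = vlapic_has_pending_irq(v);

        if ( unlikely(v->nmi_pending) )
            return hvm_intack_nmi;

        /* ... other interrupt sources elided ... */

        if ( vlapic_accept_pic_intr(v) && plat->vpic[0].int_output )
            return hvm_intack_pic(0);

        if ( vector != -1 )
            return hvm_intack_lapic(vector);

        return hvm_intack_none;
    }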
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index c817aec75d..4dea868cda 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -1945,57 +1945,31 @@ static void vmx_process_isr(int isr, struct vcpu *v)
> 
>  static void __vmx_deliver_posted_interrupt(struct vcpu *v)
>  {
> -    bool_t running = v->is_running;
> -
>      vcpu_unblock(v);
> -    /*
> -     * Just like vcpu_kick(), nothing is needed for the following two cases:
> -     * 1. The target vCPU is not running, meaning it is blocked or runnable.
> -     * 2. The target vCPU is the current vCPU and we're in non-interrupt
> -     * context.
> -     */
> -    if ( running && (in_irq() || (v != current)) )
> -    {
> +    if ( v->is_running && v != current )
>          /*
> -         * Note: Only two cases will reach here:
> -         * 1. The target vCPU is running on other pCPU.
> -         * 2. The target vCPU is the current vCPU.
> +         * If the vCPU is running on another pCPU send an IPI to the pCPU. When
> +         * the IPI arrives, the target vCPU may be running in non-root mode,
> +         * running in root mode, runnable or blocked. If the target vCPU is
> +         * running in non-root mode, the hardware will sync PIR to vIRR for
> +         * 'posted_intr_vector' is a special vector handled directly by the
> +         * hardware.
>           *
> -         * Note2: Don't worry the v->processor may change. The vCPU being
> -         * moved to another processor is guaranteed to sync PIR to vIRR,
> -         * due to the involved scheduling cycle.
> +         * If the target vCPU is running in root-mode, the interrupt handler
> +         * starts to run. Considering an IPI may arrive in the window between
> +         * the call to vmx_intr_assist() and interrupts getting disabled, the
> +         * interrupt handler should raise a softirq to ensure events will be
> +         * delivered in time.

I prefer the original comment, which covers all the possible states the
target vCPU might be in. You may help improve it if some words are not
well written, but removing useful information is not good here.
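For reference, condensing the case analysis that the original comment
(removed below) spells out:

    /*
     * Target vCPU state when the IPI arrives, per the original comment:
     *  - running in non-root mode: the hardware itself syncs PIR to vIRR,
     *    because posted_intr_vector is handled specially by the pCPU;
     *  - running in root mode: the interrupt handler raises a softirq so
     *    the event is delivered before the next vmentry;
     *  - runnable: PIR is synced to vIRR the next time the vCPU is chosen
     *    to run, so the stray IPI/softirq is harmless;
     *  - blocked: vcpu_block() checks local_events_need_delivery() just
     *    after blocking, so PIR must already have been synced; the stray
     *    IPI/softirq is again harmless.
     */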

>           */
> -        unsigned int cpu = v->processor;
> -
> -        /*
> -         * For case 1, we send an IPI to the pCPU. When an IPI arrives, the
> -         * target vCPU maybe is running in non-root mode, running in root
> -         * mode, runnable or blocked. If the target vCPU is running in
> -         * non-root mode, the hardware will sync PIR to vIRR for
> -         * 'posted_intr_vector' is special to the pCPU. If the target vCPU is
> -         * running in root-mode, the interrupt handler starts to run.
> -         * Considering an IPI may arrive in the window between the call to
> -         * vmx_intr_assist() and interrupts getting disabled, the interrupt
> -         * handler should raise a softirq to ensure events will be delivered
> -         * in time. If the target vCPU is runnable, it will sync PIR to
> -         * vIRR next time it is chose to run. In this case, a IPI and a
> -         * softirq is sent to a wrong vCPU which will not have any adverse
> -         * effect. If the target vCPU is blocked, since vcpu_block() checks
> -         * whether there is an event to be delivered through
> -         * local_events_need_delivery() just after blocking, the vCPU must
> -         * have synced PIR to vIRR. Similarly, there is a IPI and a softirq
> -         * sent to a wrong vCPU.
> -         */
> -        if ( cpu != smp_processor_id() )
> -            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
> +        send_IPI_mask(cpumask_of(v->processor), posted_intr_vector);
> +    else if ( v == current && in_irq() && !softirq_pending(smp_processor_id()) )
>          /*
> -         * For case 2, raising a softirq ensures PIR will be synced to vIRR.
> -         * As any softirq will do, as an optimization we only raise one if
> -         * none is pending already.
> +         * If on interrupt context raise a softirq so that vmx_intr_assist is
> +         * retried in case the interrupt arrives after the call to
> +         * vmx_intr_assist and before interrupts are disabled in
> +         * vmx_do_vmentry.
>           */
> -        else if ( !softirq_pending(cpu) )
> -            raise_softirq(VCPU_KICK_SOFTIRQ);
> -    }
> +        raise_softirq(VCPU_KICK_SOFTIRQ);
>  }
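As a rough sketch of the window the softirq guards against (the
vmentry path below is simplified and approximate, not the literal
assembly):

    /*
     * vmx_intr_assist();           <- the posted-interrupt IPI may land
     *                                 anywhere from here ...
     * local_irq_disable();         <- ... to here, after which its
     *                                 handler can no longer run
     * if ( softirq_pending(cpu) )  <- a raised VCPU_KICK_SOFTIRQ is
     *     goto process_softirqs;      noticed here, looping back through
     *                                 vmx_intr_assist()
     * VMRESUME;
     */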
> 
>  static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
> --
> 2.24.0
