
Re: [Xen-devel] [PATCH for-4.13] x86/vmx: always sync PIR to IRR before vmentry



On 18.11.2019 11:16, Roger Pau Monne wrote:
> @@ -1954,48 +1952,28 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
>       * 2. The target vCPU is the current vCPU and we're in non-interrupt
>       * context.
>       */
> -    if ( running && (in_irq() || (v != current)) )
> -    {
> +    if ( vcpu_runnable(v) && v != current )

I'm afraid you need to be more careful with the running vs runnable
distinction here. The comment above becomes stale with this change
(also wrt the removal of in_irq(), which I'm at least uneasy about),
and the new commentary below likewise largely says/assumes "running",
not "runnable".

In general I think "runnable" is the more appropriate state to
check for, as "running" is even more likely to change behind our
backs. But of course there are caveats: When observing "running",
v->processor is sufficiently certain to hold the pCPU the vCPU is
running on, or last ran on. For "runnable" that's less helpful,
because by the time you look at v->processor it may already have
changed to whichever pCPU the vCPU is (about to be) running on.
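
To illustrate the window (just a sketch of the shape of the race, using
the identifiers from the hunk above, not a quote of the actual code):

    /* vCPU observed runnable, but not yet pinned down anywhere. */
    if ( vcpu_runnable(v) && v != current )
    {
        /* Scheduler may migrate v here: v->processor can change under us. */
        unsigned int cpu = v->processor;    /* possibly already stale */

        send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
        /*
         * A misdirected IPI is harmless only if the vmentry path on the
         * pCPU the vCPU eventually runs on syncs PIR to IRR regardless.
         */
    }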

>          /*
> -         * Note: Only two cases will reach here:
> -         * 1. The target vCPU is running on other pCPU.
> -         * 2. The target vCPU is the current vCPU.
> +         * If the vCPU is running on another pCPU send an IPI to the pCPU. When
> +         * the IPI arrives, the target vCPU may be running in non-root mode,
> +         * running in root mode, runnable or blocked. If the target vCPU is
> +         * running in non-root mode, the hardware will sync PIR to vIRR for
> +         * 'posted_intr_vector' is a special vector handled directly by the
> +         * hardware.
>           *
> -         * Note2: Don't worry the v->processor may change. The vCPU being
> -         * moved to another processor is guaranteed to sync PIR to vIRR,
> -         * due to the involved scheduling cycle.
> +         * If the target vCPU is running in root-mode, the interrupt handler
> +         * starts to run.  Considering an IPI may arrive in the window between
> +         * the call to vmx_intr_assist() and interrupts getting disabled, the
> +         * interrupt handler should raise a softirq to ensure events will be
> +         * delivered in time.
>           */
> -        unsigned int cpu = v->processor;
> +        send_IPI_mask(cpumask_of(v->processor), posted_intr_vector);
>  
> -        /*
> -         * For case 1, we send an IPI to the pCPU. When an IPI arrives, the
> -         * target vCPU maybe is running in non-root mode, running in root
> -         * mode, runnable or blocked. If the target vCPU is running in
> -         * non-root mode, the hardware will sync PIR to vIRR for
> -         * 'posted_intr_vector' is special to the pCPU. If the target vCPU is
> -         * running in root-mode, the interrupt handler starts to run.
> -         * Considering an IPI may arrive in the window between the call to
> -         * vmx_intr_assist() and interrupts getting disabled, the interrupt
> -         * handler should raise a softirq to ensure events will be delivered
> -         * in time. If the target vCPU is runnable, it will sync PIR to
> -         * vIRR next time it is chose to run. In this case, a IPI and a
> -         * softirq is sent to a wrong vCPU which will not have any adverse
> -         * effect. If the target vCPU is blocked, since vcpu_block() checks
> -         * whether there is an event to be delivered through
> -         * local_events_need_delivery() just after blocking, the vCPU must
> -         * have synced PIR to vIRR. Similarly, there is a IPI and a softirq
> -         * sent to a wrong vCPU.
> -         */
> -        if ( cpu != smp_processor_id() )
> -            send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
> -        /*
> -         * For case 2, raising a softirq ensures PIR will be synced to vIRR.
> -         * As any softirq will do, as an optimization we only raise one if
> -         * none is pending already.
> -         */
> -        else if ( !softirq_pending(cpu) )
> -            raise_softirq(VCPU_KICK_SOFTIRQ);
> -    }
> +    /*
> +     * If the vCPU is not runnable or if it's the one currently running in this
> +     * pCPU there's nothing to do, the vmentry code will already sync the PIR
> +     * to IRR when resuming execution.
> +     */
>  }

Just for my own understanding - the "already" here relates to the code
addition you make to vmx_intr_assist()?
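I.e. (sketching from memory, so the exact shape and placement may well
differ from what the patch actually does) something like

    void vmx_intr_assist(void)
    {
        struct vcpu *v = current;

        /* Unconditionally fold pending posted interrupts into the vIRR.
         * (Placement here is my guess, not a quote of the patch.) */
        if ( cpu_has_vmx_posted_intr_processing )
            vmx_sync_pir_to_irr(v);

        /* ... existing injection logic ... */
    }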

And then - is this true even for an interrupt hitting between
vmx_intr_assist() returning and the subsequent CLI in
vmx_asm_vmexit_handler()?
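
I.e. the exit-to-guest ordering is roughly (a simplified sketch,
eliding the assembly details):

    vmx_intr_assist();       /* syncs PIR to IRR, injects pending vector */
                             /* <- posted-interrupt IPI may arrive here, */
                             /*    setting new PIR bits after the sync   */
    local_irq_disable();     /* the CLI in vmx_asm_vmexit_handler()      */
    if ( softirq_pending(smp_processor_id()) )
        /* back out and process softirqs, re-running vmx_intr_assist() */;
    /* VMRESUME */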

Jan
