[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v3 3/6] VMX: Cleanup PI per-cpu blocking list when vcpu is destroyed



On Wed, 2016-08-31 at 11:56 +0800, Feng Wu wrote:
> We should remove the vCPU from the per-cpu blocking list
> if it is going to be destroyed.
> 
> Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index b869728..37fa2f1 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -346,6 +346,7 @@ static void vmx_vcpu_destroy(struct vcpu *v)
>      vmx_destroy_vmcs(v);
>      vpmu_destroy(v);
>      passive_domain_destroy(v);
> +    vmx_pi_blocking_cleanup(v);
>
I'm not too much into VMX, so I may be wrong (in which case, sorry),
but is it safe to call this after vmx_destroy_vmcs() ?

Also (even if it is), vmx_pi_blocking_cleanup() ends up calling and
executing the following:

static void vmx_pi_remove_vcpu_from_blocking_list(struct vcpu *v)
{
    unsigned long flags;
    spinlock_t *pi_blocking_list_lock;
    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;

    /*
     * Set 'NV' field back to posted_intr_vector, so the
     * Posted-Interrupts can be delivered to the vCPU when
     * it is running in non-root mode.
     */
    write_atomic(&pi_desc->nv, posted_intr_vector);

    /* The vCPU is not on any blocking list. */
    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;

    /* Prevent the compiler from eliminating the local variable.*/
    smp_rmb();

    if ( pi_blocking_list_lock == NULL )
        return;

    spin_lock_irqsave(pi_blocking_list_lock, flags);

    /*
     * v->arch.hvm_vmx.pi_blocking.lock == NULL here means the vCPU
     * was removed from the blocking list while we are acquiring the lock.
     */
    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
    {
        ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
        list_del(&v->arch.hvm_vmx.pi_blocking.list);
        v->arch.hvm_vmx.pi_blocking.lock = NULL;
    }

    spin_unlock_irqrestore(pi_blocking_list_lock, flags);
}
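For reference, the "snapshot the lock pointer, then re-check it under the
lock" pattern that function relies on can be sketched in plain C. This is a
minimal illustrative sketch, not Xen code: pthread mutexes stand in for
Xen's spinlocks, and the struct/function names here are invented for the
example:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-in for the per-vCPU blocking state. */
struct blocking_state {
    pthread_mutex_t *lock;   /* NULL when not on any blocking list */
    int on_list;             /* stand-in for the list_head entry   */
};

/*
 * Remove the entry while tolerating a concurrent removal, the way the
 * quoted Xen function does: snapshot the lock pointer, acquire the
 * lock, then re-check the pointer before touching the list.
 */
static void remove_from_list(struct blocking_state *s)
{
    pthread_mutex_t *lock = s->lock;   /* snapshot; may race */

    if ( lock == NULL )
        return;                        /* already off the list */

    pthread_mutex_lock(lock);
    /*
     * s->lock == NULL here means someone else removed the entry
     * while we were acquiring the lock, so there is nothing to do.
     */
    if ( s->lock != NULL )
    {
        assert(s->lock == lock);
        s->on_list = 0;
        s->lock = NULL;
    }
    pthread_mutex_unlock(lock);
}
```

The point of the re-check is that the snapshot is taken without the lock
held, so a second call (or a concurrent remover) simply finds the pointer
NULL and becomes a no-op.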

Considering that we're destroying, isn't this too much? Maybe it's not
a big deal, but I'd have expected that all is needed here is something
like:

 if ( v->arch.hvm_vmx.pi_blocking.lock )
 {
     spin_lock_irqsave(..);
     list_del(..);
     spin_unlock_irqrestore(..);
 }

Maybe the resume, list_remove, and cleanup functions need to be broken
up a bit more/better?

Also, as a side note (which I think would be more appropriate as a
comment to patch 1, but bear with me, I'm just back from vacations, I
have a lot of catch up to do, and I'm in hurry! :-P), now that the
function is called vmx_pi_remove_vcpu_from_blocking_list(), this
comment being part of its body sounds a bit weird:

    ...
    /* The vCPU is not on any blocking list. */
    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
    ...

I'd take the chance for rephrasing it.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
