
Re: [Xen-devel] [PATCH v2] sync CPU state upon final domain destruction



>>> On 22.11.17 at 13:39, <JBeulich@xxxxxxxx> wrote:
> See the code comment being added for why we need this.
> 
> This is being placed here to balance between the desire to prevent
> future similar issues (the risk of which would grow if it was put
> further down the call stack, e.g. in vmx_vcpu_destroy()) and the
> intention to limit the performance impact (otherwise it could also go
> into rcu_do_batch(), paralleling the use in do_tasklet_work()).
> 
> Reported-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

I'm sorry, Julien, I did forget to Cc you (for 4.10 inclusion).

> ---
> v2: Move from vmx_vcpu_destroy() to complete_domain_destroy().
> 
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -794,6 +794,14 @@ static void complete_domain_destroy(stru
>      struct vcpu *v;
>      int i;
>  
> +    /*
> +     * Flush all state for the vCPU previously having run on the current CPU.
> +     * This is particularly relevant for x86 HVM vCPUs on VMX, so that this
> +     * flushing of state won't happen from the TLB flush IPI handler behind
> +     * the back of a vmx_vmcs_enter() / vmx_vmcs_exit() section.
> +     */
> +    sync_local_execstate();
> +
>      for ( i = d->max_vcpus - 1; i >= 0; i-- )
>      {
>          if ( (v = d->vcpu[i]) == NULL )
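
For anyone not familiar with the lazy state handling involved: below is a
minimal, self-contained sketch of the general pattern (plain C, not Xen code;
all names in it -- struct pcpu, lazy_owner, sync_lazy_state(), run_vcpu(),
teardown_vcpu() -- are made up for illustration only).

/*
 * Sketch of the "lazy state save" pattern the patch is about -- NOT the
 * Xen implementation.
 *
 * Idea: when a vCPU is descheduled, its register state is left loaded on
 * the physical CPU ("lazy"), and is only written back when someone else
 * needs the CPU or when an explicit sync is requested.  If the owner is
 * about to be freed, the state must be synced first, in a controlled
 * context -- otherwise the write-back happens later, e.g. from an
 * interrupt handler, possibly in the middle of a critical section.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct vcpu {
    char name[16];
    unsigned long saved_regs[4];   /* architectural state kept in memory */
};

/* Per-(physical-)CPU bookkeeping: whose state is still live in registers? */
struct pcpu {
    struct vcpu *lazy_owner;       /* NULL => nothing to write back */
    unsigned long live_regs[4];    /* stands in for the real register file */
};

static struct pcpu this_cpu;       /* single CPU, for the sake of the sketch */

/* Write the lazily held state back into its owner, if any. */
static void sync_lazy_state(void)
{
    struct vcpu *v = this_cpu.lazy_owner;

    if ( !v )
        return;

    memcpy(v->saved_regs, this_cpu.live_regs, sizeof(v->saved_regs));
    this_cpu.lazy_owner = NULL;
    printf("synced state of %s back to memory\n", v->name);
}

/* "Run" a vCPU: load its state and leave it live on the CPU afterwards. */
static void run_vcpu(struct vcpu *v)
{
    sync_lazy_state();             /* evict the previous owner first */
    memcpy(this_cpu.live_regs, v->saved_regs, sizeof(v->saved_regs));
    this_cpu.live_regs[0]++;       /* pretend the guest did some work */
    this_cpu.lazy_owner = v;
}

/* Final teardown, analogous in spirit to complete_domain_destroy(). */
static void teardown_vcpu(struct vcpu *v)
{
    /*
     * Sync here, in a known-safe context, so the write-back cannot be
     * triggered later (e.g. from an interrupt handler) against freed
     * memory or inside an unrelated critical section.
     */
    sync_lazy_state();
    free(v);
}

int main(void)
{
    struct vcpu *v = calloc(1, sizeof(*v));

    if ( !v )
        return 1;
    strcpy(v->name, "d1v0");

    run_vcpu(v);                   /* leaves v's state live on the CPU */
    teardown_vcpu(v);              /* must sync before freeing */
    return 0;
}

In Xen the write-back in the sketch corresponds to what sync_local_execstate()
performs; the point of the patch is to do it explicitly here, before the
domain's vCPU structures are torn down, rather than leaving it to a later TLB
flush IPI that may arrive inside a vmx_vmcs_enter() / vmx_vmcs_exit() section.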


