
Re: [Xen-devel] [PATCH v2 1/2] VMX: fix VMCS race on context-switch paths



> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Thursday, February 16, 2017 8:36 PM
> 
> >>> On 16.02.17 at 13:27, <andrew.cooper3@xxxxxxxxxx> wrote:
> > On 16/02/17 11:15, Jan Beulich wrote:
> >> When __context_switch() is being bypassed during original context
> >> switch handling, the vCPU "owning" the VMCS partially loses control of
> >> it: It will appear non-running to remote CPUs, and hence their attempt
> >> to pause the owning vCPU will have no effect on it (as it already
> >> looks to be paused). At the same time the "owning" CPU will re-enable
> >> interrupts eventually (at the latest when entering the idle loop) and
> >> hence becomes subject to IPIs from other CPUs requesting access to the
> >> VMCS. As a result, when __context_switch() finally gets run, the CPU
> >> may no longer have the VMCS loaded, and hence any accesses to it would
> >> fail. Hence we may need to re-load the VMCS in vmx_ctxt_switch_from().
> >>
> >> Similarly, when __context_switch() is being bypassed also on the second
> >> (switch-in) path, VMCS ownership may have been lost and hence needs
> >> re-establishing. Since there's no existing hook to put this in, add a
> >> new one.
> >>
> >> Reported-by: Kevin Mayer <Kevin.Mayer@xxxxxxxx>
> >> Reported-by: Anshul Makkar <anshul.makkar@xxxxxxxxxx>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >
> > Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> >
> > Although I would certainly prefer if we can get another round of testing
> > on this series for confidence.
> 
> Sure, I'd certainly like to stick a Tested-by on it. Plus VMX maintainer
> feedback will need waiting for anyway.
> 

The logic looks clean to me:

Acked-by: Kevin Tian <kevin.tian@xxxxxxxxx>
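For reference, the switch-out half of the fix boils down to re-loading the
VMCS before the save path touches it, in case a remote CPU took it away while
this vCPU looked paused. A minimal sketch of that idea follows; the exact
bodies, the vmcs_pa field and the current_vmcs per-CPU variable are assumed
names for illustration, not quoted from the patch:

static void vmx_vmcs_reload(struct vcpu *v)
{
    /*
     * Interrupts are disabled on this path, so once the VMCS is
     * loaded again no remote IPI can take it away from us.
     */
    ASSERT(!local_irq_is_enabled());

    /* Nothing to do if this CPU still has the vCPU's VMCS loaded. */
    if ( v->arch.hvm_vmx.vmcs_pa == this_cpu(current_vmcs) )
        return;

    vmx_load_vmcs(v);
}

static void vmx_ctxt_switch_from(struct vcpu *v)
{
    if ( !v->is_running )
    {
        /*
         * The vCPU appears paused to remote CPUs, so one of them may
         * have taken the VMCS via vmx_vmcs_enter(); re-establish
         * ownership before the save logic below accesses it.
         */
        vmx_vmcs_reload(v);
    }

    vmx_fpu_leave(v);
    vmx_save_guest_msrs(v);
    vmx_restore_host_msrs();
    vmx_save_dr(v);
}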
