
Re: [Xen-devel] [for-4.7] x86/emulate: synchronize LOCKed instruction emulation

On 04/14/2016 01:35 PM, David Vrabel wrote:
> On 13/04/16 13:26, Razvan Cojocaru wrote:
>> LOCK-prefixed instructions are currently allowed to run in parallel
>> in x86_emulate(), which can lead the guest into an undefined state.
>> This patch fixes the issue.
> Is this sufficient?  What if another VCPU is running on another PCPU and
> doing an unemulated LOCK-prefixed instruction to the same memory address?
> This other VCPU could be for another domain (or Xen for that matter).

The patch is only sufficient for parallel runs of emulated instructions,
as previously stated. It is, however, able to prevent nasty guest lockups.

This matches what happened in a previous thread where I was hunting down
the issue: I initially thought that the xen-access.c model was broken
when used with emulation, and even verified that the ring buffer memory
accesses were synchronized properly. They were fine; the actual problem
was LOCKed instruction emulation happening in parallel, i.e. a race
condition there.

The race is less obvious if we signal that vm_event responses are
available immediately after processing each one (which greatly reduces
the chance of it triggering), and more obvious with guests that have two
or more VCPUs, all of them paused waiting for a vm_event reply, then all
woken up at the same time (after all the events have been processed) and
asked to emulate.

I do believe that emulation could happen in this parallel manner
elsewhere in Xen as well, so I hope to make emulation generally safer.

As for not fixing the _whole_ issue, as Jan has rightly pointed out,
that's a rather difficult thing to do.

I will add a comment in V2 of the patch clearly stating its limitations,
as well as more information about how the patch proposes to fix the
issue described (as requested by Jan Beulich).


Xen-devel mailing list


