
Re: [Xen-devel] Future x86 emulator direction



On 15/03/17 07:49, Jan Beulich wrote:
>>>> On 14.03.17 at 22:07, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> On 12/14/2016 09:37 AM, Razvan Cojocaru wrote:
>>> On 12/14/2016 09:14 AM, Jan Beulich wrote:
>>>>>>> On 13.12.16 at 23:02, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> On 13/12/2016 21:55, Razvan Cojocaru wrote:
>>>>>> On a somewhat related note, it's important to also figure out how best
>>>>>> to avoid emulation races such as the LOCK CMPXCHG issue we've discussed
>>>>>> in the past. Maybe that's also worth taking into consideration at this
>>>>>> early stage.
>>>>> Funny you should ask that.
>>>>>
>>>>> The only possible way to do this safely is to have the emulator map the
>>>>> target frame(s) and execute a locked stub instruction with a memory
>>>>> operand pointing at the mapping.  We have no other way of interacting
>>>>> with the cache coherency fabric.
>>>> Well, that approach is necessary only if one path (vCPU) can write
>>>> to a page, while another one needs emulation. If pages are globally
>>>> write-protected, an approach following the model from Razvan's
>>>> earlier patch (which I have no idea what has become of) would
>>>> seem to suffice.
>>> As previously stated, you've raised performance concerns which seemed to
>>> require a different direction, namely the one Andrew is now suggesting,
>>> which, aside from being somewhat faster, is also safer for all cases
>>> (including the one you've mentioned, where one path can write normally
>>> and the other does so via emulation).
>>>
>>> The old patch itself is still alive in the XenServer patch queue, albeit
>>> quite unlikely to be trivial to apply to the current Xen 4.9-unstable
>>> code in its current form:
>>>
>>>
>>> https://github.com/xenserver/xen-4.7.pg/blob/master/master/xen-x86-emulate-syncrhonise-LOCKed-instruction-emulation.patch
>>> Again, if you decide that this patch is preferable, I can try to rework
>>> it for the current version of Xen.
>> Sorry to revive this old thread, but I'm still not sure what the
>> upstream solution for this very real problem should be. Should I bring
>> back the old patch that synchronizes LOCKed CMPXCHGs (perhaps with
>> Andrew's kind help, as he's stated that they keep an up-to-date patch
>> that works against staging)? Or are you considering implementing a stub
>> as part of the work being done on the emulator?
> Both are options imo. The stub approach would likely be the better
> long-term solution, but it carries quite a bit of emulator rework with
> it, since we'd have to completely change the way memory writes get
> carried out: as we'd need to act on the actual (guest) memory location,
> we'd have to do a page walk (or possibly two, for an access crossing a
> page boundary) before running the stub, presumably completely replacing
> the ->write() hook. Compared with this, making the ->cmpxchg() hook
> work as originally intended seems to be the more straightforward
> solution.

We already need to change how reads and writes happen.  As it currently
stands, accesses which cross a page boundary are not handled correctly:
we complete a partial read/write on the first page before discovering
that the second page takes a pagefault.  (The root of the problem is that
hvm_copy() has a dual use; originally as a memcpy(), and later to
implement an individual instruction's accesses.)
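
A minimal sketch of that shape (the translation helper below is made up
for illustration; it is not an existing Xen interface): translate and map
every page touched by the access up front, so that a fault on the second
page is raised before anything has been written to the first.

    /*
     * Illustrative only: hvmemul_virt_to_mfn() stands in for whatever
     * translation primitive the reworked ->write()/->cmpxchg() paths
     * end up using.
     */
    static void *map_linear_range(struct vcpu *v, unsigned long addr,
                                  unsigned int bytes, uint32_t pfec)
    {
        unsigned long first = addr & PAGE_MASK;
        unsigned long last  = (addr + bytes - 1) & PAGE_MASK;
        unsigned int i, nr = (first == last) ? 1 : 2;
        mfn_t mfn[2];
        uint8_t *p;

        /* Walk (and hence fault on) every page before touching any data. */
        for ( i = 0; i < nr; ++i )
        {
            mfn[i] = hvmemul_virt_to_mfn(v, first + i * PAGE_SIZE, pfec);
            if ( mfn_eq(mfn[i], INVALID_MFN) )
                return NULL;    /* inject #PF; nothing partially written */
        }

        /* Produce one virtually contiguous mapping for the whole access. */
        p = vmap(mfn, nr);

        return p ? p + (addr & ~PAGE_MASK) : NULL;
    }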

The HVM side of the code needs to be altered to work in the same way
that sh_x86_emulate_{write,cmpxchg}() currently uses
sh_emulate_map_dest(), except that the read side needs to be included as
well.  This is important for handling MMIO, where reads may have side
effects.
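
Roughly the shape that could take (the map/unmap helpers below are
hypothetical stand-ins for an HVM analogue of sh_emulate_map_dest()):
once the destination is mapped, the exchange is issued for real with a
LOCKed stub on the mapping, so the cache coherency fabric arbitrates
against concurrent guest writers.

    /* Sketch only; helper names and error handling are invented. */
    static int hvmemul_cmpxchg_4(struct vcpu *v, unsigned long addr,
                                 uint32_t old, uint32_t new)
    {
        uint32_t prev;
        void *ptr = hvm_emulate_map_dest(v, addr, 4);   /* hypothetical */

        if ( ptr == NULL )
            return X86EMUL_RETRY;

        /* The actual LOCKed instruction runs against the mapping. */
        asm volatile ( "lock cmpxchgl %2, %1"
                       : "=a" (prev), "+m" (*(uint32_t *)ptr)
                       : "r" (new), "0" (old)
                       : "memory" );

        hvm_emulate_unmap_dest(v, ptr, 4);              /* hypothetical */

        /*
         * prev != old means a concurrent (non-emulated) write won the
         * race; the emulator must report that rather than pretend the
         * exchange succeeded.  RETRY is used here purely for illustration.
         */
        return prev == old ? X86EMUL_OKAY : X86EMUL_RETRY;
    }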

Once that is complete, the cmpxchg hook at least should have proper
atomic properties.

The next question is how to go about making all other LOCKed
instructions have atomic properties.  One suggestion was to implement
all LOCKed instructions in terms of cmpxchg, but I suspect that will
come with an unreasonably high overhead for introspection when all
vCPUs are hitting the same spinlock.
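
For reference, that cmpxchg-based fallback would look roughly like the
following (hook signatures approximated; a failed compare is assumed to
surface as RETRY for the purposes of the sketch).  Every LOCKed
read-modify-write turns into a read / modify / compare-exchange loop,
and under contention each vCPU can go around that loop many times per
emulated instruction, which is where the overhead concern comes from.

    /* Illustrative only: LOCK ADD r32, m32 in terms of ->cmpxchg(). */
    static int emulate_locked_add32(
        enum x86_segment seg, unsigned long offset, uint32_t src,
        const struct x86_emulate_ops *ops, struct x86_emulate_ctxt *ctxt)
    {
        int rc;

        for ( ; ; )
        {
            uint32_t old, new;

            if ( (rc = ops->read(seg, offset, &old, 4, ctxt)) != X86EMUL_OKAY )
                return rc;

            new = old + src;

            /* Fails (and loops) whenever another vCPU modified the word. */
            rc = ops->cmpxchg(seg, offset, &old, &new, 4, ctxt);
            if ( rc != X86EMUL_RETRY )
                return rc;
        }
    }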

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

