
Re: [Xen-devel] [PATCH RFC] x86/emulate: implement hvmemul_cmpxchg() with an actual CMPXCHG



>>> On 30.03.17 at 17:05, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 03/30/2017 05:21 PM, Jan Beulich wrote:
>>>>> On 30.03.17 at 16:08, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> On 03/30/2017 03:56 PM, Razvan Cojocaru wrote:
>>>> On 03/30/2017 03:05 PM, Jan Beulich wrote:
>>>>> What I do note though is that you don't copy back the value
>>>>> __cmpxchg() returns, yet that's what is needed. *map may
>>>>> have changed again already.
>>>>
>>>> Changing the code to:
>>>>
>>>> 1162     ret = __cmpxchg(map, old, new, bytes);
>>>> 1163
>>>> 1164     if ( ret != old )
>>>> 1165     {
>>>> 1166         memcpy(p_old, &ret, bytes);
>>>> 1167         rc = X86EMUL_CMPXCHG_FAILED;
>>>> 1168     }
>>>>
>>>> where ret is an unsigned long still triggers BSODs when I add my patch
>>>> to yours. I'll need to dig deeper.
>>>
>>> Nevermind, I've found the culprit: hvm_emulate_one_vm_event()'s code
>>> needs to be wrapped in a loop that checks for X86EMUL_RETRY again, since
>>> hvmemul_cmpxchg() may return RETRY even for some mapping problems, in
>>> which case we again end up with the guest trying to re-execute an
>>> emulable CMPXCHG.
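A self-contained sketch of the wrapper loop being described here, with the
emulate function stubbed out and illustrative return values (this is not the
actual hvm_emulate_one_vm_event() code):

#include <stdio.h>

#define X86EMUL_OKAY   0
#define X86EMUL_RETRY  2  /* illustrative values, not Xen's */

/* Stub standing in for hvm_emulate_one(); reports RETRY twice, then OKAY. */
static int emulate_once(void)
{
    static int transient = 2;
    return transient-- > 0 ? X86EMUL_RETRY : X86EMUL_OKAY;
}

int main(void)
{
    int rc;

    /* Keep retrying while the emulator reports a transient condition. */
    do {
        rc = emulate_once();
    } while ( rc == X86EMUL_RETRY );

    printf("final rc = %d\n", rc);
    return 0;
}
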
>> 
>> This seems wrong to me - note how my patch changes behavior
>> regarding the return value from paging_cmpxchg_guest_entry()
>> in ptwr_emulated_update().
>> 
>>> However, this gets me back to my original problem when I "solved" it in
>>> the same manner (looping until emulation succeeds) back when
>>> hvmemul_cmpxchg() failures were reported with RETRY: eventually the
>>> guest BSODs with code 101. RETRY failures are still possible coming from
>>> the hvmemul_vaddr_to_mfn() code in my patch.
>>>
>>> I wonder if I should just return X86EMUL_CMPXCHG_FAILED for all those as
>>> well and just never end up returning RETRY from hvmemul_cmpxchg().
>> 
>> That would seem similarly wrong to me - it ought to be
>> UNHANDLEABLE, I would think. In any event, never returning
>> RETRY would also be wrong in case you hit emulated MMIO, but
>> that's really the only case where RETRY should be passed back.
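Put as a sketch, the classification being suggested would look roughly like
the following; the helper names are hypothetical stand-ins and the return
values are illustrative (this is not the actual hvmemul_cmpxchg() code):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative values, not Xen's actual definitions. */
#define X86EMUL_OKAY          0
#define X86EMUL_UNHANDLEABLE  1
#define X86EMUL_RETRY         2

/* Hypothetical placeholders for the real MMIO check and mapping helper. */
static bool is_emulated_mmio(unsigned long addr) { (void)addr; return false; }
static void *map_guest_addr(unsigned long addr) { (void)addr; return NULL; }

/*
 * RETRY is reserved for emulated MMIO, while a mapping failure - after the
 * guest's own virtual->physical translation already succeeded - is treated
 * as UNHANDLEABLE.
 */
static int classify_cmpxchg_target(unsigned long addr)
{
    if ( is_emulated_mmio(addr) )
        return X86EMUL_RETRY;

    if ( map_guest_addr(addr) == NULL )
        return X86EMUL_UNHANDLEABLE;

    return X86EMUL_OKAY; /* proceed with the actual CMPXCHG */
}

int main(void)
{
    printf("rc = %d\n", classify_cmpxchg_target(0x1000));
    return 0;
}
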
> 
> Sorry, I don't follow: hvm_emulate_one_vm_event() calls
> hvm_emulate_one() and then does this with the return value:
> 
> switch ( rc )
> {
> case X86EMUL_RETRY:
>     /*
>      * This function is called when handling an EPT-related vm_event
>      * reply. As such, nothing else needs to be done here, since simply
>      * returning makes the current instruction cause a page fault again,
>      * consistent with X86EMUL_RETRY.
>      */
>     return;
> case X86EMUL_UNHANDLEABLE:
>     hvm_dump_emulation_state(XENLOG_G_DEBUG "Mem event", &ctx);
>     hvm_inject_hw_exception(trapnr, errcode);
>     break;
> case X86EMUL_EXCEPTION:
>     if ( ctx.ctxt.event_pending )
>         hvm_inject_event(&ctx.ctxt.event);
>     break;
> }
> 
> hvm_emulate_writeback(&ctx);
> 
> If I'd return UNHANDLEABLE from hvmemul_cmpxchg() where I'm now
> returning RETRY from hvmemul_vaddr_to_mfn(), that would inject
> TRAP_invalid_op in the guest, so it seems rather harsh.

Well, my comment was about normal execution of the guest. In
order for an access to make it into the emulator at all, the guest's
own virtual->physical translation must have worked, so if the
hypervisor then hits a mapping failure, that indicates something
fishy going on inside the guest.

> Speaking of emulated MMIO, I've got this when the guest was crashing
> immediately (pre RETRY loop):
> 
>  MMIO emulation failed: d3v8 32bit @ 0008:82679f3c -> f0 0f ba 30 00 72
> 07 8b cb e8 da 4b ff ff 8b 45

That's a BTR, which we should be emulating fine. More information
would need to be collected to have a chance of understanding what
might be going on (first of all the virtual and physical memory
addresses this was trying to act on).
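For reference, the leading bytes of that dump decode as:

f0             lock prefix
0f ba 30 00    btr dword ptr [eax], 0   (0F BA is Grp8; ModRM 0x30 selects /6 = BTR with [eax])
72 07          jb +7

i.e. a locked bit-test-and-reset of bit 0 in the dword at [eax], consistent
with the BTR reading above.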

Jan

