
Re: [Xen-devel] [PATCH RFC] x86/emulate: implement hvmemul_cmpxchg() with an actual CMPXCHG



On 03/31/2017 06:04 PM, Jan Beulich wrote:
>>>> On 31.03.17 at 17:01, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> On 03/31/2017 05:46 PM, Jan Beulich wrote:
>>>>>> On 31.03.17 at 11:56, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>> On 03/31/2017 10:34 AM, Jan Beulich wrote:
>>>>>>>> On 31.03.17 at 08:17, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>>>> On 03/30/2017 06:47 PM, Jan Beulich wrote:
>>>>>>>> Speaking of emulated MMIO, I've got this when the guest was crashing
>>>>>>>> immediately (pre RETRY loop):
>>>>>>>>
>>>>>>>>  MMIO emulation failed: d3v8 32bit @ 0008:82679f3c -> f0 0f ba 30 00 72
>>>>>>>> 07 8b cb e8 da 4b ff ff 8b 45
>>>>>>>
>>>>>>> That's a BTR, which we should be emulating fine. More information
>>>>>>> would need to be collected to have a chance to understand what
>>>>>>> might be going on (first of all, the virtual and physical memory
>>>>>>> address this was trying to act on).
>>>>>>
>>>>>> Right, the BTR part should be fine, but I think the LOCK part is what's
>>>>>> causing the issue. I've done a few more test runs to see what returns
>>>>>> RETRY (dumping the instruction with an "(r)" prefix to distinguish it
>>>>>> from the UNHANDLEABLE dump), and a couple of instructions return RETRY
>>>>>> (BTR and XADD, both LOCK-prefixed, which means they now involve the
>>>>>> CMPXCHG handler, which presumably now fails - possibly simply because
>>>>>> it's always LOCKed in my patch):
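
For reference, a LOCKed read-modify-write funnelled through the CMPXCHG
hook looks roughly like this - a hedged sketch, not the actual
x86_emulate() code: read_op() and cmpxchg_op() stand in for the real
ops->read / ops->cmpxchg callbacks, "addr" and "bit" are the decoded
operands, and the CF update from the old bit value is omitted:

    for ( ;; )
    {
        unsigned long old, new;
        int rc;

        /* Re-read the current memory value on every iteration. */
        if ( (rc = read_op(addr, &old, sizeof(old))) != X86EMUL_OKAY )
            return rc;

        new = old & ~(1UL << (bit & 31));   /* BTR: clear the selected bit */

        rc = cmpxchg_op(addr, &old, &new, sizeof(new));
        if ( rc == X86EMUL_CMPXCHG_FAILED ) /* lost the race: re-read, retry */
            continue;

        return rc;                          /* OKAY, RETRY, or an error */
    }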
>>>>>
>>>>> Well, all of that looks to be expected behavior. I'm afraid I don't see
>>>>> how this information helps understand the MMIO emulation failure
>>>>> above.
>>>>
>>>> I've managed to obtain this log of emulation errors:
>>>> https://pastebin.com/Esy1SkHx 
>>>>
>>>> The "virtual address" lines that are not followed by any "Mem event"
>>>> line correspond to CMPXCHG_FAILED return codes.
>>>>
>>>> The very last line is an MMIO emulation failure.
>>>>
>>>> It's probably important that this happens with the model where
>>>> hvm_emulate_one_vm_event() does _not_ re-try the emulation until it
>>>> succeeds. The other model allows me to go further with the guest, but
>>>> eventually I get timeout-related BSODs or the guest becomes unresponsive.
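
The other model is essentially a retry loop around the emulation - a
minimal sketch, assuming the loop lives in hvm_emulate_one_vm_event()
and that hvm_emulate_one() keeps returning X86EMUL_RETRY for as long as
the underlying CMPXCHG fails:

    struct hvm_emulate_ctxt ctx;
    int rc;

    /* ctx assumed already set up, e.g. via hvm_emulate_init_once(). */
    do {
        rc = hvm_emulate_one(&ctx);
    } while ( rc == X86EMUL_RETRY );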
>>>
>>> Interesting. You didn't clarify what the printed "offset" values are,
>>> and it doesn't look like these have any correlation with the underlying
>>> (guest) physical address, which we would also want to see. And then
>>> it strikes me as odd that in these last lines
>>>
>>> (XEN) Mem event (RETRY) emulation failed: d5v8 32bit @ 0008:826bb861 ->
>>> f0 0f ba 30 00 72 07 8b cb e8 da 4b ff ff 8b 45
>>> (XEN) virtual address: 0xffd080f0, offset: 4291854576
>>> (XEN) MMIO emulation failed: d5v8 32bit @ 0008:82655f3c ->
>>> f0 0f ba 30 00 72 07 8b cb e8 da 4b ff ff 8b 45
>>>
>>> the instruction pointers and virtual addresses are different, but the
>>> code bytes are exactly the same. This doesn't seem very likely, so I
>>> wonder whether there's an issue with us wrongly re-using previously
>>> fetched insn bytes. (Of course I'd be happy to be proven wrong about
>>> this guess, by you checking the involved binary/ies.)
>>
>> Offset is the actual value of the "offset" parameter of
>> hvmemul_cmpxchg().
> 
> That's not very useful then, as for flat segments "offset" ==
> "virtual address" (i.e. you merely re-print in decimal what you've
> already printed in hex).
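
Right. To also get at the guest physical address, the print could be
extended along these lines (hedged sketch: "addr" is the virtual address
already printed, and the pfec handling is simplified):

    uint32_t pfec = PFEC_page_present;
    unsigned long gfn = paging_gva_to_gfn(current, addr, &pfec);

    printk(XENLOG_G_DEBUG "virtual address: %#lx, gfn: %#lx\n", addr, gfn);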

The attached patch (a combination of your patch and mine) produces the
following output when booting a Windows 7 32-bit guest with monitoring:
https://pastebin.com/ayiFmj1N

The failed MMIO emulation is caused by a mapping failure due to the
"!nestedhvm_vcpu_in_guestmode(curr) && hvm_mmio_internal(gpa)" condition
being true in hvmemul_vaddr_to_mfn(). I've lifted that check from
__hvm_copy(), but it looks like this might not be the right way to use it.
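
To make the failing path concrete, this is roughly the relevant part of
hvmemul_vaddr_to_mfn() - a reconstructed sketch, the authoritative
version being in the attachment; details such as the error handling here
are assumptions:

    static unsigned long hvmemul_vaddr_to_mfn(unsigned long addr, uint32_t pfec)
    {
        struct vcpu *curr = current;
        unsigned long gfn = paging_gva_to_gfn(curr, addr, &pfec);
        p2m_type_t p2mt;
        struct page_info *page;
        paddr_t gpa;

        if ( gfn == gfn_x(INVALID_GFN) )
            return mfn_x(INVALID_MFN);

        gpa = pfn_to_paddr(gfn) | (addr & ~PAGE_MASK);

        /*
         * The check lifted from __hvm_copy().  There a hit makes the
         * caller fall back to MMIO emulation; here it simply fails the
         * mapping - hence the "MMIO emulation failed" message above.
         */
        if ( !nestedhvm_vcpu_in_guestmode(curr) && hvm_mmio_internal(gpa) )
            return mfn_x(INVALID_MFN);

        page = get_page_from_gfn(curr->domain, gfn, &p2mt, P2M_UNSHARE);
        if ( !page )
            return mfn_x(INVALID_MFN);

        return page_to_mfn(page); /* caller must put_page() when done */
    }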


Thanks,
Razvan

Attachment: combined_patches.patch
Description: Text Data

