
Re: [Xen-devel] [PATCH v3 11/24] x86/emul: Implement singlestep as a retire flag



On 01/12/16 11:16, Jan Beulich wrote:
>>>> On 30.11.16 at 14:50, <andrew.cooper3@xxxxxxxxxx> wrote:
>> The behaviour of singlestep is to raise #DB after the instruction has been
>> completed, but implementing it with inject_hw_exception() causes
>> x86_emulate() to return X86EMUL_EXCEPTION, despite successfully completing
>> execution of the instruction, including register writeback.
> Nice, I think that'll help simplify the privop patch a bit.
>
>> --- a/xen/arch/x86/mm/shadow/multi.c
>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> @@ -3422,6 +3422,16 @@ static int sh_page_fault(struct vcpu *v,
>>          v->arch.paging.last_write_emul_ok = 0;
>>  #endif
>>  
>> +    if ( emul_ctxt.ctxt.retire.singlestep )
>> +    {
>> +        if ( is_hvm_vcpu(v) )
>> +            hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>> +        else
>> +            pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>> +
>> +        goto emulate_done;
> This results in skipping the PAE special code (which I think is intended)

Correct

> but also the trace_shadow_emulate(), which I don't think is wanted.

Hmm.  It is only the PAE case we want to skip.  Perhaps changing the PAE
entry condition to

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index c45d260..28ff945 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3480,7 +3480,7 @@ static int sh_page_fault(struct vcpu *v,
     }
 
 #if GUEST_PAGING_LEVELS == 3 /* PAE guest */
-    if ( r == X86EMUL_OKAY ) {
+    if ( r == X86EMUL_OKAY && !emul_ctxt.ctxt.retire.raw ) {
         int i, emulation_count=0;
         this_cpu(trace_emulate_initial_va) = va;
         /* Emulate up to four extra instructions in the hope of catching

would be better, along with suitable comments and style fixes?
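
Roughly what I have in mind (the comment wording is only a first stab, and
the brace moves onto its own line to match the prevailing style):

#if GUEST_PAGING_LEVELS == 3 /* PAE guest */
    /*
     * If the emulation has left anything pending to retire (e.g. a
     * singlestep #DB), don't go hunting for the second half of a PAE
     * PTE write; deliver what we already have.
     */
    if ( r == X86EMUL_OKAY && !emul_ctxt.ctxt.retire.raw )
    {
        int i, emulation_count = 0;

        this_cpu(trace_emulate_initial_va) = va;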

>
>> @@ -3433,7 +3443,7 @@ static int sh_page_fault(struct vcpu *v,
>>              shadow_continue_emulation(&emul_ctxt, regs);
>>              v->arch.paging.last_write_was_pt = 0;
>>              r = x86_emulate(&emul_ctxt.ctxt, emul_ops);
>> -            if ( r == X86EMUL_OKAY )
>> +            if ( r == X86EMUL_OKAY && !emul_ctxt.ctxt.retire.raw )
> I think this wants to have a comment attached explaining why
> a blanket check of all current and future retire flags here is the
> right thing (or benign).

Ok.
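
Something along these lines, perhaps (exact wording to be decided):

            /*
             * Stop the repeated-emulation loop if the emulator has left
             * anything pending to retire (singlestep, mov_ss, etc.).
             * Carrying on emulating would at best lose the pending event,
             * so bailing out is the safe choice for any current or future
             * retire flag.
             */
            if ( r == X86EMUL_OKAY && !emul_ctxt.ctxt.retire.raw )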

>
>> @@ -3449,6 +3459,15 @@ static int sh_page_fault(struct vcpu *v,
>>              {
>>                  perfc_incr(shadow_em_ex_fail);
>>                  TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_EMULATION_LAST_FAILED);
>> +
>> +                if ( emul_ctxt.ctxt.retire.singlestep )
>> +                {
>> +                    if ( is_hvm_vcpu(v) )
>> +                        hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>> +                    else
>> +                        pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>> +                }
>> +
>>                  break; /* Don't emulate again if we failed! */
> This comment is now slightly stale.

"failed to find the second half of the write".  In combination with a
suitable comment in the hunk above, this should be fine as is.

>
>> @@ -5415,11 +5414,11 @@ x86_emulate(
>>      if ( !mode_64bit() )
>>          _regs.eip = (uint32_t)_regs.eip;
>>  
>> -    *ctxt->regs = _regs;
>> +    /* Was singlestepping active at the start of this instruction? */
>> +    if ( (rc == X86EMUL_OKAY) && (ctxt->regs->eflags & EFLG_TF) )
>> +        ctxt->retire.singlestep = true;
> Don't we need to avoid doing this when mov_ss is set? Or does the
> hardware perhaps do the necessary deferring for us?

I am currently reading up about this in the manual.  I need more coffee.
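
For reference, if it turns out we do need to mirror the deferral ourselves
(rather than relying on hardware to do it when the #DB is delivered), I'd
expect it to end up looking something like the below -- untested, assuming
retire.mov_ss is still accurate at this point, and whether it is needed at
all is exactly what I'm checking:

    /*
     * Was singlestepping active at the start of this instruction?  A
     * MOV/POP SS suppresses the resulting #DB until after the following
     * instruction, so don't raise it here if that shadow is in effect.
     */
    if ( (rc == X86EMUL_OKAY) && (ctxt->regs->eflags & EFLG_TF) &&
         !ctxt->retire.mov_ss )
        ctxt->retire.singlestep = true;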

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

