
Re: [Xen-devel] [RFC] x86/vm_event: Allow returning i-cache for emulation



>>> On 09.09.16 at 17:41, <tamas.lengyel@xxxxxxxxxxxx> wrote:
> When emulating instructions the emulator maintains a small i-cache fetched
> from the guest memory. Under certain scenarios this memory region may contain
> instructions that a monitor subscriber would prefer to hide, namely INT3, and
> instead would prefer to emulate a different instruction in-place.
> 
> This patch extends the vm_event interface to allow returning this i-cache via
> the vm_event response.

Please explain better what this change is about, perhaps along
the lines of the description George used, which you then
confirmed.

> @@ -1783,7 +1786,14 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>          pfec |= PFEC_user_mode;
>  
>      hvmemul_ctxt->insn_buf_eip = regs->eip;
> -    if ( !vio->mmio_insn_bytes )
> +
> +    if ( unlikely(hvmemul_ctxt->set_context_insn) )
> +    {
> +        memcpy(hvmemul_ctxt->insn_buf, curr->arch.vm_event->emul_data.data,
> +               curr->arch.vm_event->emul_data.size);
> +        hvmemul_ctxt->insn_buf_bytes = curr->arch.vm_event->emul_data.size;
> +    }
> +    else if ( !vio->mmio_insn_bytes )

I'm not sure about this ordering: Do you really mean to allow an
external entity to override insn bytes e.g. in an MMIO retry, i.e.
allowing the retried insn to be different from the original one?

Additionally I think you need to deal with the case of the
supplied data not being a full insn. There shouldn't be any
fetching from guest memory in that case imo; emulation should
just fail.
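
Something along these lines in the fetch path would seem necessary
(just a sketch, untested; set_context_insn and insn_buf_bytes as in
your patch, and X86EMUL_UNHANDLEABLE merely one way to signal the
failure):

    /* Sketch for hvmemul_insn_fetch(): insn_off being the offset
     * into insn_buf - when the insn bytes were supplied via
     * vm_event, running off the end of the buffer must fail the
     * emulation instead of falling back to a guest memory read.
     */
    if ( hvmemul_ctxt->set_context_insn &&
         insn_off + bytes > hvmemul_ctxt->insn_buf_bytes )
        return X86EMUL_UNHANDLEABLE;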

> @@ -1927,17 +1937,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>      struct hvm_emulate_ctxt ctx = {{ 0 }};
>      int rc;
>  
> +    gdprintk(XENLOG_WARNING, "memaccess emulate one called\n");
> +
>      hvm_emulate_prepare(&ctx, guest_cpu_user_regs());
>  
> -    switch ( kind )
> -    {
> -    case EMUL_KIND_NOWRITE:
> +    if ( kind == EMUL_KIND_NOWRITE )
>          rc = hvm_emulate_one_no_write(&ctx);
> -        break;
> -    case EMUL_KIND_SET_CONTEXT:
> -        ctx.set_context = 1;
> -        /* Intentional fall-through. */
> -    default:
> +    else
> +    {
> +         if ( kind == EMUL_KIND_SET_CONTEXT_DATA )
> +            ctx.set_context_data = 1;
> +         else if ( kind == EMUL_KIND_SET_CONTEXT_INSN )
> +            ctx.set_context_insn = 1;
> +
>          rc = hvm_emulate_one(&ctx);
>      }

Could you try to retain the switch() statement?
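E.g. (just a sketch, untested):

    switch ( kind )
    {
    case EMUL_KIND_NOWRITE:
        rc = hvm_emulate_one_no_write(&ctx);
        break;
    case EMUL_KIND_SET_CONTEXT_INSN:
        ctx.set_context_insn = 1;
        rc = hvm_emulate_one(&ctx);
        break;
    case EMUL_KIND_SET_CONTEXT_DATA:
        ctx.set_context_data = 1;
        /* Intentional fall-through. */
    default:
        rc = hvm_emulate_one(&ctx);
        break;
    }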

> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -66,6 +66,16 @@ void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v)
>      hvm_toggle_singlestep(v);
>  }
>  
> +void vm_event_interrupt_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
> +{
> +    if ( !!(rsp->flags & VM_EVENT_FLAG_EMULATE) &&
> +         !!(rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA) )

Please avoid !! when not really needed.
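I.e. simply

    if ( (rsp->flags & VM_EVENT_FLAG_EMULATE) &&
         (rsp->flags & VM_EVENT_FLAG_SET_EMUL_INSN_DATA) )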

> +    {
> +        v->arch.vm_event->emulate_flags = rsp->flags;
> +        v->arch.vm_event->emul_data = rsp->data.emul_data;
> +    }
> +}

Overall I'm having difficulty associating the function's name with
what it does.

> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -407,8 +407,11 @@ void vm_event_resume(struct domain *d, struct vm_event_domain *ved)
>              vm_event_register_write_resume(v, &rsp);
>              break;
>  
> +        case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
> +            vm_event_interrupt_emulate_check(v, &rsp);
> +            break;
> +
>  #ifdef CONFIG_HAS_MEM_ACCESS
> -        case VM_EVENT_REASON_MEM_ACCESS:
>              mem_access_resume(v, &rsp);
>              break;

I don't think you meant to remove that line?
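Presumably this was meant to be (sketch, keeping the case label you
dropped):

    case VM_EVENT_REASON_SOFTWARE_BREAKPOINT:
        vm_event_interrupt_emulate_check(v, &rsp);
        break;

#ifdef CONFIG_HAS_MEM_ACCESS
    case VM_EVENT_REASON_MEM_ACCESS:
        mem_access_resume(v, &rsp);
        break;
#endif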

Jan


