
Re: [Xen-devel] [PATCH 2/2] x86/vm_event: Allow returning i-cache for emulation



On Wed, Sep 14, 2016 at 11:58 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 14.09.16 at 18:20, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>> On Wed, Sep 14, 2016 at 9:55 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>> On 13.09.16 at 20:12, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>>> When emulating instructions the emulator maintains a small i-cache fetched
>>>> from the guest memory. This patch extends the vm_event interface to allow
>>>> returning this i-cache via the vm_event response instead.
>>>
>>> I guess I'm slightly confused: Isn't the purpose to have the monitoring
>>> app _write_ to the cache maintained in Xen? Or else, who is
>>> "emulator" here? If that's the app, then I think subject and description
>>> for hypervisor patches would better be written taking the perspective
>>> of the hypervisor, especially when using potentially ambiguous terms.
>>
>> Well, I thought it was obvious that the built-in emulator in Xen is
>> what this patch is referring to. Otherwise none of this makes sense.
>
> It would have been if the sentence didn't say "returning". The
> internal emulator's cache gets effectively overwritten, not
> returned to anything (unless I'm still misunderstanding something).

It's "returning" the i-cache in the sense that it's part of a vm_event response.

>
>>>> @@ -1944,11 +1951,19 @@ void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>>>      case EMUL_KIND_NOWRITE:
>>>>          rc = hvm_emulate_one_no_write(&ctx);
>>>>          break;
>>>> -    case EMUL_KIND_SET_CONTEXT:
>>>> -        ctx.set_context = 1;
>>>> -        /* Intentional fall-through. */
>>>> -    default:
>>>> +    case EMUL_KIND_SET_CONTEXT_DATA:
>>>> +        ctx.set_context_data = 1;
>>>> +        rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    case EMUL_KIND_SET_CONTEXT_INSN:
>>>> +        ctx.set_context_insn = 1;
>>>>          rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>> +    case EMUL_KIND_NORMAL:
>>>> +        rc = hvm_emulate_one(&ctx);
>>>> +        break;
>>>
>>> One of the former two surely can be made to fall through into the latter?
>>
>> That's what I did before by turning this into if-elses, and you
>> requested going back to a switch. With a switch, though, I'd rather
>> make the cases explicit, as a fall-through just makes it harder to
>> read for no real benefit.
>
> I disagree.

OK, I don't really care about it too much, so if you feel that strongly
about it, then fine.

>
>>>> +    default:
>>>> +        return;
>>>
>>> Why don't you retain the previous default handling?
>>
>> The default label is unused, and this way that is more apparent when
>> reading the code.
>
> Just like before, imo there shouldn't be any EMUL_KIND_NORMAL
> case; that should be the default label instead.

But there is an EMUL_KIND_NORMAL case. All callers of this function
must pass a pre-defined kind; the function is never called with an
undefined kind, so the default label really is just EMUL_KIND_NORMAL.
Why is it better to keep it under the default label rather than
explicitly showing that it's actually just that one case?
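
Just to make sure we're talking about the same shape, I take it the
suggestion is roughly (sketch only):

    switch ( kind )
    {
    case EMUL_KIND_NOWRITE:
        rc = hvm_emulate_one_no_write(&ctx);
        break;
    case EMUL_KIND_SET_CONTEXT_INSN:
        ctx.set_context_insn = 1;
        rc = hvm_emulate_one(&ctx);
        break;
    case EMUL_KIND_SET_CONTEXT_DATA:
        ctx.set_context_data = 1;
        /* Intentional fall-through. */
    default:
        rc = hvm_emulate_one(&ctx);
        break;
    }

whereas my version spells EMUL_KIND_NORMAL out explicitly and bails
out on anything unknown.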

>
>>>> @@ -265,6 +272,10 @@ struct vm_event_emul_read_data {
>>>>      uint8_t  data[sizeof(struct vm_event_regs_x86) - sizeof(uint32_t)];
>>>>  };
>>>>
>>>> +struct vm_event_emul_insn_data {
>>>> +    uint8_t data[16]; /* Has to be completely filled */
>>>> +};
>>>
>>> Any reason for the 16 (rather than the architectural 15) here?
>>
>> Yes, see the definition in include/asm-x86/hvm/emulate.h:
>>
>> struct hvm_emulate_ctxt {
>>     struct x86_emulate_ctxt ctxt;
>>
>>     /* Cache of 16 bytes of instruction. */
>>     uint8_t insn_buf[16];
>
> Ah, I didn't recall we have an oversized cache there too. But such a
> connection would better be documented by a BUILD_BUG_ON()
> comparing the two sizeof()s.

Sure.
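
Something along these lines, I assume (exact placement aside):

    /* The vm_event insn buffer must match the emulator's cache in size. */
    BUILD_BUG_ON(sizeof(((struct vm_event_emul_insn_data *)0)->data) !=
                 sizeof(((struct hvm_emulate_ctxt *)0)->insn_buf));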

Thanks,
Tamas
