
Re: [Xen-devel] [PATCH v2 2/2] x86/vm_event: Allow overwriting Xen's i-cache used for emulation



>>> On 20.09.16 at 17:54, <tamas.lengyel@xxxxxxxxxxxx> wrote:
> On Tue, Sep 20, 2016 at 9:39 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 20.09.16 at 17:14, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>> On Tue, Sep 20, 2016 at 9:12 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>> On 20.09.16 at 16:56, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>>>> On Tue, Sep 20, 2016 at 1:26 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>>>> On 19.09.16 at 20:27, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>>>>>> On Mon, Sep 19, 2016 at 2:19 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>>>>>> On 15.09.16 at 18:51, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>>>>>>>> @@ -1793,7 +1793,17 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>>>>>>>          pfec |= PFEC_user_mode;
>>>>>>>>>
>>>>>>>>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>>>>>>>>> -    if ( !vio->mmio_insn_bytes )
>>>>>>>>> +
>>>>>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>>>>>>>> +    {
>>>>>>>>> +        BUILD_BUG_ON(sizeof(hvmemul_ctxt->insn_buf_bytes) ==
>>>>>>>>> +                     sizeof(curr->arch.vm_event->emul.insn));
>>>>>>>>
>>>>>>>> This should quite clearly be !=, and I think it builds only because you
>>>>>>>> use the wrong operand in the first sizeof().
>>>>>>>>
>>>>>>>>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul.insn);
>>>>>>>>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul.insn,
>>>>>>>>> +               hvmemul_ctxt->insn_buf_bytes);
>>>>>>>>
>>>>>>>> This memcpy()s between dissimilar types. Please omit the & and
>>>>>>>> properly add .data on the second argument (and this .data
>>>>>>>> addition should then also be mirrored in the BUILD_BUG_ON()).
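>>>>>>>> 
>>>>>>>> I.e. - just a sketch, assuming the byte array inside emul.insn
>>>>>>>> is the .data member referred to above - the fragment would then
>>>>>>>> read:
>>>>>>>> 
>>>>>>>>     BUILD_BUG_ON(sizeof(hvmemul_ctxt->insn_buf) !=
>>>>>>>>                  sizeof(curr->arch.vm_event->emul.insn.data));
>>>>>>>> 
>>>>>>>>     hvmemul_ctxt->insn_buf_bytes =
>>>>>>>>         sizeof(curr->arch.vm_event->emul.insn.data);
>>>>>>>>     memcpy(hvmemul_ctxt->insn_buf, curr->arch.vm_event->emul.insn.data,
>>>>>>>>            hvmemul_ctxt->insn_buf_bytes);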
>>>>>>>>
>>>>>>>>> +    }
>>>>>>>>> +    else if ( !vio->mmio_insn_bytes )
>>>>>>>>
>>>>>>>> And then - I'm sorry for not having thought of this before - I
>>>>>>>> think this would better not live here, or should take effect
>>>>>>>> more explicitly only when coming here through
>>>>>>>> hvm_emulate_one_vm_event(). Since the former seems impractical,
>>>>>>>> I think giving _hvm_emulate_one() one or two extra parameters
>>>>>>>> would be the most straightforward approach.
>>>>>>>
>>>>>>> So this is the spot where the mmio insn buffer also gets copied,
>>>>>>> instead of the instructions being fetched from guest memory. So
>>>>>>> having the vm_event buffer copied here too makes the most sense.
>>>>>>> Copying the vm_event insn buffer in somewhere else, while the
>>>>>>> mmio insn buffer is copied here, IMHO just fragments the flow
>>>>>>> even more, making it harder to see what is actually happening.
>>>>>>
>>>>>> And I didn't unconditionally ask to move the copying elsewhere.
>>>>>> The alternative - passing the override in as function argument(s),
>>>>>> which would then be NULL/zero for all cases except the VM event
>>>>>> one - would be just as suitable. It is in particular ...
>>>>>>
>>>>>>> How about adjusting the if-else here to be:
>>>>>>>
>>>>>>> if ( !vio->mmio_insn_bytes && !hvmemul_ctxt->set_context_insn  )
>>>>>>> ...
>>>>>>> else if ( vio->mmio_insn_bytes )
>>>>>>> ...
>>>>>>> else if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>>>>>
>>>>>> ... this curr->arch.vm_event reference which I'd like to see gone
>>>>>> from this specific code path. The ordering in your original patch,
>>>>>> otoh, would then be fine (check for the override first with unlikely(),
>>>>>> else do what is being done today). Such a code structure would
>>>>>> then also ease a possible second way of overriding the insn by
>>>>>> some other party, without having to touch the code here again.
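>>>>>> 
>>>>>> Roughly - just a sketch, with invented parameter names - what I
>>>>>> have in mind is:
>>>>>> 
>>>>>>     static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>>>>                                 const unsigned char *insn_buf,
>>>>>>                                 unsigned int insn_buf_bytes)
>>>>>>     {
>>>>>>         ...
>>>>>>         if ( unlikely(insn_buf_bytes) )
>>>>>>         {
>>>>>>             /* Caller-supplied override takes precedence. */
>>>>>>             hvmemul_ctxt->insn_buf_bytes = insn_buf_bytes;
>>>>>>             memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_buf_bytes);
>>>>>>         }
>>>>>>         else if ( !vio->mmio_insn_bytes )
>>>>>>         ...
>>>>>>     }
>>>>>> 
>>>>>> with all callers passing NULL/0 except hvm_emulate_one_vm_event(),
>>>>>> which would pass the vm_event buffer. That keeps the vm_event
>>>>>> specifics out of this function entirely.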
>>>>>
>>>>> So that check is one that Razvan asked to be added. I think it is
>>>>> necessary too, as there seems to be a race condition if vm_event
>>>>> gets shut down after the response flag is set but before this
>>>>> emulation path is taken. Effectively, set_context_insn may be set
>>>>> while arch.vm_event has already been freed. Razvan, is that correct?
>>>>
>>>> Well, in case you misunderstood: I didn't ask for the check to be
>>>> _removed_, but for it to be _moved elsewhere_.
>>>>
>>>
>>> So, as Razvan pointed out, there is already a check in hvm_do_resume
>>> with exactly the same effect, so what you are asking for is already
>>> done.
>>
>> Partly - I really meant all curr->arch.vm_event uses to go away from
>> that path. The other part (passing in the override buffer instead of
>> special casing vm-event handling here) still would need to be done.
>>
> 
> I don't really follow what exactly you are looking for. You want the
> buffer to be passed in as an input? We can do that, but the mmio case
> doesn't do that either. And what do you mean by not "special casing
> vm_event handling"? We need to handle it in an if-statement, because
> by default the buffer is fetched from guest memory. We don't want
> that, just as the mmio case doesn't want it either. So I think if we
> want to be consistent we should do what the mmio case does - fetch
> the buffer from curr->arch.hvm_vcpu.hvm_io - only we fetch it from
> curr->arch.vm_event.
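> 
> I.e. what the existing mmio path in _hvm_emulate_one() does today is
> (roughly) this:
> 
>     else
>     {
>         /* Replay insn bytes stashed by an earlier mmio emulation. */
>         hvmemul_ctxt->insn_buf_bytes = vio->mmio_insn_bytes;
>         memcpy(hvmemul_ctxt->insn_buf, vio->mmio_insn,
>                vio->mmio_insn_bytes);
>     }
> 
> and my patch just mirrors that with curr->arch.vm_event->emul.insn as
> the source.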

No. Please look back at my original reply (still visible in context
above). You're comparing apples and oranges - the existing override
is an integral part of the emulation logic, while yours is an add-on.
And btw., see how
https://lists.xenproject.org/archives/html/xen-devel/2016-09/msg00897.html
even factors out that part.

It might even be an option to simply copy your override data right
into vio->mmio_insn{,_bytes}, in the vm-event specific function,
allowing all other code to remain untouched.
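
Roughly - sketch only, reusing the field names from your patch plus
the .data member mentioned earlier - hvm_emulate_one_vm_event() could
do:

    if ( set_context_insn && curr->arch.vm_event )
    {
        struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;

        /* Stage the vm_event-supplied bytes where the emulator
         * already looks for a pre-filled insn buffer. */
        vio->mmio_insn_bytes = sizeof(curr->arch.vm_event->emul.insn.data);
        memcpy(vio->mmio_insn, curr->arch.vm_event->emul.insn.data,
               vio->mmio_insn_bytes);
    }

before invoking the emulation, at which point the existing
vio->mmio_insn handling would pick the bytes up automatically.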

Jan
