
Re: [Xen-devel] [PATCH v2 2/2] x86/vm_event: Allow overwriting Xen's i-cache used for emulation



On 09/20/2016 05:56 PM, Tamas K Lengyel wrote:
> On Tue, Sep 20, 2016 at 1:26 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 19.09.16 at 20:27, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>> On Mon, Sep 19, 2016 at 2:19 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>> On 15.09.16 at 18:51, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>>>>> @@ -1793,7 +1793,17 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>>>          pfec |= PFEC_user_mode;
>>>>>
>>>>>      hvmemul_ctxt->insn_buf_eip = regs->eip;
>>>>> -    if ( !vio->mmio_insn_bytes )
>>>>> +
>>>>> +    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>>>> +    {
>>>>> +        BUILD_BUG_ON(sizeof(hvmemul_ctxt->insn_buf_bytes) ==
>>>>> +                     sizeof(curr->arch.vm_event->emul.insn));
>>>>
>>>> This should quite clearly be !=, and I think it builds only because you
>>>> use the wrong operand in the first sizeof().
>>>>
>>>>> +        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul.insn);
>>>>> +        memcpy(hvmemul_ctxt->insn_buf, &curr->arch.vm_event->emul.insn,
>>>>> +               hvmemul_ctxt->insn_buf_bytes);
>>>>
>>>> This memcpy()s between dissimilar types. Please omit the & and
>>>> properly add .data on the second argument (and this .data
>>>> addition should then also be mirrored in the BUILD_BUG_ON()).
>>>>
>>>>> +    }
>>>>> +    else if ( !vio->mmio_insn_bytes )
>>>>
>>>> And then - I'm sorry for not having thought of this before - I think
>>>> this would better not live here, or have an effect more explicitly
>>>> only when coming here through hvm_emulate_one_vm_event().
>>>> Since the former seems impractical, I think giving _hvm_emulate_one()
>>>> one or two extra parameters would be the most straightforward
>>>> approach.
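
For reference, folding Jan's BUILD_BUG_ON() and memcpy() corrections into the
hunk gives roughly the following (a sketch only; whether the insn_buf_bytes
assignment should likewise use .data is an assumption on my part, based on
emul.insn keeping its bytes in a data[] array):

    if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
    {
        /* Require the two buffers to have exactly the same size. */
        BUILD_BUG_ON(sizeof(hvmemul_ctxt->insn_buf) !=
                     sizeof(curr->arch.vm_event->emul.insn.data));
        hvmemul_ctxt->insn_buf_bytes = sizeof(curr->arch.vm_event->emul.insn.data);
        memcpy(hvmemul_ctxt->insn_buf, curr->arch.vm_event->emul.insn.data,
               hvmemul_ctxt->insn_buf_bytes);
    }
    else if ( !vio->mmio_insn_bytes )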
>>>
>>> So this is the spot where the mmio insn buffer gets copied in as well,
>>> instead of the instructions being fetched from guest memory, so having
>>> the vm_event buffer copied here too makes the most sense. Copying the
>>> vm_event insn buffer in somewhere else, while the mmio insn buffer is
>>> copied here, IMHO just fragments the flow even more, making it harder
>>> to see what is actually happening.
>>
>> And I didn't unconditionally ask to move the copying elsewhere.
>> The alternative - passing the override in as function argument(s),
>> which would then be NULL/zero for all cases except the VM event
>> one - would be just as suitable. It is in particular ...
>>
>>> How about adjusting the if-else here to be:
>>>
>>> if ( !vio->mmio_insn_bytes && !hvmemul_ctxt->set_context_insn  )
>>> ...
>>> else if ( vio->mmio_insn_bytes )
>>> ...
>>> else if ( unlikely(hvmemul_ctxt->set_context_insn) && curr->arch.vm_event )
>>
>> ... this curr->arch.vm_event reference which I'd like to see gone
>> from this specific code path. The ordering in your original patch,
>> otoh, would then be fine (check for the override first with unlikely(),
>> else do what is being done today). Such a code structure would
>> then also ease a possible second way of overriding the insn by
>> some other party, without having to touch the code here again.
>>
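Put differently, the shape Jan seems to have in mind is something like the
sketch below (the parameter names are placeholders of mine, and any other
existing parameters of _hvm_emulate_one() are left out):

    static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
                                const unsigned char *insn_override,
                                unsigned int insn_override_bytes)
    {
        /* ... unchanged ... */

        hvmemul_ctxt->insn_buf_eip = regs->eip;

        if ( unlikely(insn_override_bytes) )
        {
            /* Caller-supplied bytes: no vm_event reference needed here. */
            hvmemul_ctxt->insn_buf_bytes = insn_override_bytes;
            memcpy(hvmemul_ctxt->insn_buf, insn_override, insn_override_bytes);
        }
        else if ( !vio->mmio_insn_bytes )
        {
            /* ... fetch from guest memory, as today ... */
        }

        /* ... */
    }

hvm_emulate_one_vm_event() would then pass curr->arch.vm_event->emul.insn.data
and its size, while every other caller passes NULL/0, so the vm_event
dereference never appears in _hvm_emulate_one() itself.
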
> 
> So that check is one that Razvan asked to be added. I think it is
> necessary too, as there seems to be a race condition if vm_event gets
> shut down after the response flag is set but before this emulation path
> takes place. Effectively, set_context_insn may still be set while
> arch.vm_event has already been freed. Razvan, is that correct?

Yes, that's the general idea, but there's also already a check in
hvm_do_resume() right before calling hvm_mem_access_emulate_one(), so as
long as that's the only code path leading here it should be alright to
remove the check. The check adds extra safety but it's not strictly
necessary here, so if Jan prefers it taken out it should, theoretically,
be fine for now.
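
For completeness, the check in hvm_do_resume() is essentially of this shape
(paraphrased from memory rather than quoted, so the emulate_flags field name
should be double-checked against the tree):

    if ( unlikely(curr->arch.vm_event) )
    {
        if ( curr->arch.vm_event->emulate_flags )
        {
            /*
             * ... pick the emulation kind and call
             * hvm_mem_access_emulate_one(), which eventually reaches
             * _hvm_emulate_one() ...
             */
            curr->arch.vm_event->emulate_flags = 0;
        }
        /* ... */
    }

So by the time _hvm_emulate_one() runs on this path, arch.vm_event has
already been tested for being non-NULL.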


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

