Re: [Xen-devel] [PATCH v2] x86emul: add "unblock NMI" retire flag



On 12/04/2017 08:43, Jan Beulich wrote:
>>>> On 11.04.17 at 18:09, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 11/04/17 16:36, Jan Beulich wrote:
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -1955,25 +1955,32 @@ static int _hvm_emulate_one(struct hvm_e
>>>          memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
>>>      }
>>>  
>>> -    if ( rc != X86EMUL_OKAY )
>>> -        return rc;
>>> +    new_intr_shadow = hvmemul_ctxt->intr_shadow;
>>>  
>>> -    if ( hvmemul_ctxt->ctxt.retire.singlestep )
>>> -        hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>>> +    /*
>>> +     * IRET, if valid in the given context, clears NMI blocking
>>> +     * (irrespective of rc).
>>> +     */
>>> +    if ( hvmemul_ctxt->ctxt.retire.unblock_nmi )
>>> +        new_intr_shadow &= ~HVM_INTR_SHADOW_NMI;
>>>  
>>> -    new_intr_shadow = hvmemul_ctxt->intr_shadow;
>>> +    if ( rc == X86EMUL_OKAY )
>>> +    {
>> On further thought, given the assertion, you don't need to introduce
>> this check, and can avoid the block indentation.  It should make the
>> patch rather smaller.
> Hmm, I did consider this, but I feel uneasy doing so, as it leaves
> production builds in potentially bad shape (in case there's a path
> that never gets exercised in any of our routine testing, but which
> someone nevertheless ends up going down in a released product).
> Paul, you're the maintainer of this code; do you have an opinion
> either way?

Actually, on yet further thought, the reset nature of these conditions
is necessary even on the != OKAY path.

Consider "mov %eax, %ss; ud2a".  Even on raising an exception for the
ud2a, the SS shadow needs dropping.
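For illustration, a minimal sketch of the ordering this implies (field
and constant names mirror the quoted patch, but this is not the final
code, and the MOV-SS handling shown is my assumption based on the
example above):

    unsigned int new_intr_shadow = hvmemul_ctxt->intr_shadow;

    /*
     * IRET, if valid in the given context, clears NMI blocking,
     * irrespective of rc.
     */
    if ( hvmemul_ctxt->ctxt.retire.unblock_nmi )
        new_intr_shadow &= ~HVM_INTR_SHADOW_NMI;

    /*
     * Likewise the MOV-SS shadow must be recomputed irrespective of
     * rc: in "mov %eax, %ss; ud2a" the MOV SS retires successfully,
     * so even though the ud2a then faults (rc != X86EMUL_OKAY), the
     * shadow has to be dropped before the exception is delivered.
     */
    if ( !hvmemul_ctxt->ctxt.retire.mov_ss )
        new_intr_shadow &= ~HVM_INTR_SHADOW_MOV_SS;

    /* Only strictly success-dependent work stays behind the check. */
    if ( rc == X86EMUL_OKAY && hvmemul_ctxt->ctxt.retire.singlestep )
        hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);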

~Andrew
