
Re: [Xen-devel] [PATCH v12] x86/emulate: Send vm_event from emulate



On 20.09.2019 16:59, Alexandru Stefan ISAILA wrote:
> 
> 
> On 20.09.2019 17:22, Jan Beulich wrote:
>> On 20.09.2019 14:16, Alexandru Stefan ISAILA wrote:
>>> In order to have __hvm_copy() issue ~X86EMUL_RETRY a new return type,
>>> HVMTRANS_need_retry, was added and all the places that consume HVMTRANS*
>>> and needed adjustment where changed accordingly.
>>
>> This is wrong and hence confusing: __hvm_copy() would never return
>> ~X86EMUL_RETRY. In fact I think you've confused yourself enough to
>> make a questionable (possibly resulting) change:
> 
> The idea was to get X86EMUL_RETRY down the line from __hvm_copy().
> I will adjust this.
> 
>>
>>> @@ -582,7 +583,7 @@ static void *hvmemul_map_linear_addr(
>>>           ASSERT(mfn_x(*mfn) == 0);
>>>   
>>>           res = hvm_translate_get_page(curr, addr, true, pfec,
>>> -                                     &pfinfo, &page, NULL, &p2mt);
>>> +                                     &pfinfo, &page, &gfn, &p2mt);
>>
>> This function ...
>>
>>>           switch ( res )
>>>           {
>>> @@ -601,6 +602,7 @@ static void *hvmemul_map_linear_addr(
>>>   
>>>           case HVMTRANS_gfn_paged_out:
>>>           case HVMTRANS_gfn_shared:
>>> +        case HVMTRANS_need_retry:
>>
>> ... can't return this value, so you should omit this addition,
>> letting the new return value go through "default:".
> 
> It is very clear that HVMTRANS_need_retry will not be returned from 
> that function, at least for now. I thought you wanted to have every 
> possible case covered in the switch. I can remove that case; there is 
> no problem here because, as I've said, it will never be entered.
> 
> But as you said, later work with HVMTRANS_need_retry will result in 
> returning X86EMUL_RETRY. Anyway, it's your call whether I should 
> remove it or not.

The result should be consistent (i.e. between the case here
and the rep_movs / rep_stos cases below). Overall I think it
would be cleanest if in all three cases an ASSERT_UNREACHABLE()
fell through to a "return X86EMUL_RETRY;".

>>> @@ -1852,6 +1864,8 @@ static int hvmemul_rep_movs(
>>>   
>>>       xfree(buf);
>>>   
>>> +    ASSERT(rc != HVMTRANS_need_retry);
>>> +
>>>       if ( rc == HVMTRANS_gfn_paged_out )
>>>           return X86EMUL_RETRY;
>>>       if ( rc == HVMTRANS_gfn_shared )
>>> @@ -1964,6 +1978,8 @@ static int hvmemul_rep_stos(
>>>           if ( buf != p_data )
>>>               xfree(buf);
>>>   
>>> +        ASSERT(rc != HVMTRANS_need_retry);
>>> +
>>>           switch ( rc )
>>>           {
>>>           case HVMTRANS_gfn_paged_out:
>>
>> Looking at this again, I think it would better be an addition to
>> the switch() (using ASSERT_UNREACHABLE()). Generally this is
>> true for the rep_movs case as well, but that one would first
>> need converting to switch(), which I agree is beyond the scope
> 
> I agree that this is beyond the scope of this patch, but it's not that 
> big a change and it can be done.
> 
> But doesn't having a default ASSERT_UNREACHABLE(); in both switch 
> statements change the behavior of the function?

It shouldn't be the default case that gains this assertion,
but the HVMTRANS_need_retry one that is to be added.

>> of this change. In both cases a brief comment would seem
>> worthwhile adding, clarifying that the new return value can
>> result from hvm_copy_*_guest_linear() only. This might become
>> relevant in particular if, down the road, we invent more cases
>> where HVMTRANS_need_retry is produced.
> 
> Is this comment aimed for the commit message or another place?

If you go the ASSERT_UNREACHABLE() route, then the comment(s)
should be code comments next to these assertions. They'd save
people having to dig out the reason for the assertions being
there, and make it easy to decide whether it is safe to drop
them once some new producer of HVMTRANS_need_retry appears.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

