
Re: [Xen-devel] [PATCH v4 05/10] xen/arm: optee: add std call handling



Julien Grall writes:

> On 20/03/2019 16:14, Volodymyr Babchuk wrote:
>>
>> Hi Julien,
>
> Hi Volodymyr,
>
>> Julien Grall writes:
>>
>> [...]
>>>>        struct dt_device_node *node;
>>>> @@ -48,12 +82,25 @@ static bool optee_probe(void)
>>>>             (uint32_t)resp.a3 != OPTEE_MSG_UID_3 )
>>>>            return false;
>>>>    +    /* Read number of threads */
>>>> +    arm_smccc_smc(OPTEE_SMC_GET_CONFIG, OPTEE_SMC_CONFIG_NUM_THREADS, 
>>>> &resp);
>>>> +    if ( resp.a0 == OPTEE_SMC_RETURN_OK )
>>>
>>> Out of interest, when was this call added?
>>
>> It is under review right now. We have reached agreement that this call
>> is needed. I believe it will be merged into OP-TEE before I send v5
>> of this series.
>
> Please mention it after --- so we don't forget to check the state of
> the new call before merging to Xen.
Okay, sure

>
>>
>>>> +    {
>>>> +        max_optee_threads = resp.a1;
>>>> +        printk(XENLOG_DEBUG "OP-TEE supports %u threads.\n", 
>>>> max_optee_threads);
>>>
>>> extra NIT: I would use XENLOG_INFO rather than XENLOG_DEBUG. This is
>>> useful information to have in a bug report.
>>>
>>> Regarding the message, what matters is the number of threads for the
>>> guest. So I would rework it to make it more meaningful for a user
>>> that does not know the internals.
>>>
>>> You might also want to move the message out of the if. So you have the
>>> message even when OP-TEE does not support the SMC.
>> In that case I don't know how many threads OP-TEE really
>> supports... I think I will need to rephrase that message. Or, better,
>> I'll add another message like "Assuming that OP-TEE supports %d
>> threads per guest".
>
> Was there any OP-TEE release containing your virtualization patches?
No, there have been no releases with virtualization support.

> Depending on the answer, I would consider mandating
> OPTEE_SMC_CONFIG_NUM_THREADS so we don't have to deal with a default
> value in Xen.
Makes sense, I think.

> Also, should we expose this call to the guest as well?
Yes, this is a good point.
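
To make that concrete, here is a rough sketch of how the probe side could
look if OPTEE_SMC_CONFIG_NUM_THREADS becomes mandatory. This is only an
illustration, not the v5 patch: the message wording is a placeholder, and
a guest-visible variant of the query would simply return the same
max_optee_threads value to the guest.

    /* Read the number of threads; refuse to enable the mediator if
     * OP-TEE cannot report it (sketch only, assuming the call is
     * made mandatory as discussed above). */
    arm_smccc_smc(OPTEE_SMC_GET_CONFIG, OPTEE_SMC_CONFIG_NUM_THREADS,
                  &resp);
    if ( resp.a0 != OPTEE_SMC_RETURN_OK )
        return false;

    max_optee_threads = resp.a1;
    printk(XENLOG_INFO
           "OP-TEE supports %u simultaneous standard calls per guest\n",
           max_optee_threads);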

[...]

>>>> +static void put_std_call(struct optee_domain *ctx, struct optee_std_call 
>>>> *call)
>>>> +{
>>>> +    spin_lock(&ctx->lock);
>>>> +    ASSERT(call->in_flight);
>>>> +    unmap_xen_arg(call);
>>>
>>> Same question for the unmap.
>> Yeah, under normal circumstances a guest should not try to resume a call
>> on another vCPU, because we haven't returned from the original call on
>> the current vCPU. But what if a guest tries to do this anyway? There is
>> a chance that the current CPU will unmap a buffer that was mapped by
>> another CPU an instant ago.
>
> Wasn't that the point of having the in_flight field? As soon as you set
> in_flight to true, only one CPU can have the optee_std_call
> structure in hand.
>
> So, as long as you map/unmap within the section protected by
> "in_flight", you are protected against any race. The lock is only here
> to protect the in_flight field and the list. Did I miss anything?

This is partially true. I can call map_xen_arg() after spin_unlock(), yes.

But then I need to call unmap_xen_arg() before spin_lock() in the
put_std_call() function. If there were no ASSERT() that would be okay. But
with the ASSERT() it looks weird: first we unmap the buffer and then we
assert that we had the right to do so. This is a sort of "use before check".

And obviously, I can't call ASSERT() without holding the spinlock.

On the other hand, ASSERT() is a debugging feature, so we can pretend that
it is not there... If you are okay with this, I'll move the mapping and
unmapping out of the spinlock-protected sections.
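
Just to illustrate what I mean (a rough sketch only, not the actual patch;
the rest of the function body is assumed, only the unmap/ASSERT ordering
is the point here):

static void put_std_call(struct optee_domain *ctx,
                         struct optee_std_call *call)
{
    /*
     * While in_flight is set, no other vCPU can pick up this call, so
     * the unmap is race-free even without holding the lock. The
     * downside is that the ownership ASSERT() below runs only after
     * the unmap.
     */
    unmap_xen_arg(call);

    spin_lock(&ctx->lock);
    ASSERT(call->in_flight);
    call->in_flight = false;
    spin_unlock(&ctx->lock);
}

The get path would do the opposite: set in_flight under the lock and call
map_xen_arg() only after spin_unlock().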

--
Best regards,
Volodymyr Babchuk
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

