
Re: [Xen-devel] [PATCH v3 07/11] optee: add support for RPC SHM buffers

Hi Julien,

Julien Grall writes:

>>>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>>>> index dc90e2ed8e..771148e940 100644
>>>>>> --- a/xen/arch/arm/tee/optee.c
>>>>>> +++ b/xen/arch/arm/tee/optee.c
>>>>>> @@ -30,6 +30,12 @@
>>>>>>      * OP-TEE spawns a thread for every standard call.
>>>>>>      */
>>>>>>     #define MAX_STD_CALLS   16
>>>>>> +/*
>>>>>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>>>>>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>>>>>> + * option CFG_NUM_THREADS
>>>>>> + */
>>>>> Same as patch #6 regarding CFG_NUM_THREADS.
>>>> Right now OP-TEE will not allocate more than one buffer per OP-TEE
>>>> thread, and I can see no reason why that would change. So basically I
>>>> could remove MAX_RPC_SHMS altogether and use MAX_STD_CALLS instead, but
>>>> then it would not be obvious why I compare the number of SHM buffers
>>>> with the number of standard calls. Thus, I think it is good to have a
>>>> separate define and comment.
>>> I am not against having the 2 defines; what I was pointing out is
>>> that the documentation on top is incorrect, as in patch #6.
>> I'm sorry, I don't quite understand. In patch #6 your concern was that
>> CFG_NUM_THREADS depends on platform, right?
> My first concern was the documentation does not reflect the reality
> because CFG_NUM_THREADS is not always equal to 16.

> Ideally we should be able to know the number of threads supported. But
> that could be a follow-up patch (or potentially ignored) if nothing
> bad can happen when Xen handles more threads than OP-TEE does.
No, it is a valid situation. The opposite case (when OP-TEE can handle
more threads than Xen) can also be handled in the right way.

> In any case, the documentation in Xen should reflect the reality.
Ah, okay, got it.

>>> If you happen to make MAX_STD_CALLS dynamic, then this should also be
>>> dynamic.
>> Of course.
>> [...]
>>>>>>     +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>>>> +                                  struct cpu_user_regs *regs)
>>>>>> +{
>>>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>>>> +
>>>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>>> Shouldn't you bail out in that case? Also, I would turn it into a gdprintk.
>>>> OP-TEE does its own checks, and that check will fail there too. Then
>>>> OP-TEE will issue a request to free this SHM.
>>> I think it is better if we stay on the safe side. I.e. if we know there
>>> would be an error (like here), then you need to return an error
>>> from Xen rather than calling OP-TEE. Otherwise, you may end up with a
>>> nasty problem.
>> Actually, I don't see how I can do this. This is an RPC response; I
>> can't return an error to an RPC response. All I can do is mangle the RPC
>> response in such a way that OP-TEE will surely treat it as an error and
>> act accordingly.
> I am not sure I understand what you mean here. Surely if the address
> is not aligned, then OP-TEE will return an error too, right? So can't
> you emulate OP-TEE's behavior in that case?
No, OP-TEE will not return an error; OP-TEE will handle it as an error.
For OP-TEE any RPC looks like a function call, so it expects some return
value - a buffer pointer in this case. If it gets an invalid pointer, it
issues another RPC to free this buffer and then propagates the error back
to the caller.
So, if I return an error back to the guest (or rather issue an RPC to
free the buffer), OP-TEE will still be blocked, waiting for a pointer
from the guest. Of course, I could go further and emulate a new buffer
allocation RPC from the Xen side, in the hope that the guest will provide
a valid buffer this time. But what if not? And why should I decide
instead of the guest?

>>> Also, I think you want a comment (maybe in the commit message)
>>> explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to
>>> 4096.
>> It is always equal to 4096 bytes. But, AFAIK, Xen can work with other
>> PAGE_SIZEs, can't it? Actually, the Linux driver supports other page
>> sizes; it just splits those pages into 4KB chunks. The same could be
>> done in Xen, but I don't want to introduce this logic right now. The
>> patches are complex enough already, I hope.
> Xen only supports 4KB pages, but I wouldn't bet on that in the future
> on Arm, as we will require 64KB pages for some features (i.e. 52-bit
> support).
So, I was mistaken. For some reason I thought that Xen could handle other
page sizes as well.

> I wasn't asking you to handle a different PAGE_SIZE, just to clarify
> in a comment on top that you expect both defines to be
> valid. Another BUILD_BUG_ON is not necessary, assuming you add the one
> in patch #6.
Okay, I'll add such comment.

Best regards,
Volodymyr Babchuk