
Re: [Xen-devel] [PATCH v3 07/11] optee: add support for RPC SHM buffers



Hi,

Julien Grall writes:

> Hi,
>
> On 1/17/19 7:48 PM, Volodymyr Babchuk wrote:
>>
>> Julien Grall writes:
>>
>>> Hi Volodymyr,
>>>
>>> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>>>> From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>
>>>>
[...]
>>>>
>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>> index dc90e2ed8e..771148e940 100644
>>>> --- a/xen/arch/arm/tee/optee.c
>>>> +++ b/xen/arch/arm/tee/optee.c
>>>> @@ -30,6 +30,12 @@
>>>>     * OP-TEE spawns a thread for every standard call.
>>>>     */
>>>>    #define MAX_STD_CALLS   16
>>>> +/*
>>>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>>>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>>>> + * option CFG_NUM_THREADS
>>>> + */
>>>
>>> Same as patch #6 regarding CFG_NUM_THREADS.
>> Right now OP-TEE will not allocate more than one buffer per OP-TEE
>> thread, and I can see no reason why that would change. So basically I
>> could remove MAX_RPC_SHMS altogether and use MAX_STD_CALLS instead.
>> But then it would not be obvious why I compare the number of SHM
>> buffers with the number of standard calls. Thus, I think it is good
>> to have a separate define and comment.
>
> I am not against having the two defines; what I was pointing out is
> that the documentation on top is incorrect, as in patch #6.
I'm sorry, I don't quite understand. In patch #6 your concern was that
CFG_NUM_THREADS depends on the platform, right?
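
Just to make sure we are talking about the same thing, this is how the
two defines relate in this patch (a sketch; in the series MAX_RPC_SHMS
is simply an alias for MAX_STD_CALLS):

    /*
     * OP-TEE spawns a thread for every standard call.
     */
    #define MAX_STD_CALLS   16

    /*
     * Maximal number of pre-allocated SHM buffers. OP-TEE generally
     * asks for one SHM buffer per thread, so this also corresponds to
     * the OP-TEE option CFG_NUM_THREADS.
     */
    #define MAX_RPC_SHMS    MAX_STD_CALLS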

> If you happen to make MAX_STD_CALLS dynamic, then this should also be
> dynamic.
Of course.
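If that ever happens, I would keep both limits derived from the same
per-domain value, e.g. (hypothetical field names, just to illustrate
the idea):

    struct optee_domain {
        /* One OP-TEE thread serves one standard call... */
        unsigned int max_std_calls;
        /* ...and asks for at most one RPC SHM buffer. */
        unsigned int max_rpc_shms;  /* always equal to max_std_calls */
    };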

[...]
>>>>    +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>>>> +                                  struct cpu_user_regs *regs)
>>>> +{
>>>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>>> +
>>>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>>>> +        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
>>>
>>> Shouldn't you bail out in that case? Also, I would turn it into a gdprintk.
>> OP-TEE does its own checks, and that check will fail there as well.
>> Then OP-TEE will issue a request to free this SHM.
>
> I think it is better if we stay on the safe side. I.e. if we know
> there will be an error (like here), then you need to return an error
> from Xen rather than calling OP-TEE. Otherwise, you may end up with
> a nasty problem.
Actually, I don't see how I can do this. This is an RPC response, and I
can't return an error to an RPC response. All I can do is mangle the
RPC response in such a way that OP-TEE will surely treat it as an error
and act accordingly.
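
For example, something like this (a sketch, not final code;
set_user_reg() is the existing Xen helper, and I am assuming OP-TEE
treats a zero buffer address in a1/a2 as a failed allocation):

    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
    {
        gdprintk(XENLOG_WARNING,
                 "Domain returned invalid RPC command buffer\n");
        /*
         * Hand a NULL buffer back to OP-TEE, so it treats the
         * allocation as failed and frees the request on its side.
         */
        set_user_reg(regs, 1, 0);
        set_user_reg(regs, 2, 0);
    }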

> Also, I think you want a comment (maybe in the commit message)
> explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to
> PAGE_SIZE.
It is always equal to 4096 bytes. But, AFAIK, Xen can work with other
PAGE_SIZEs, can't it? Actually, the Linux driver supports other page
sizes; it just splits those pages into 4K chunks. The same could be
done in Xen, but I don't want to introduce that logic right now, as the
patches are complex enough already. Right now we have
BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE), and I hope
that is enough.
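
For reference, the guard looks like this (placing it in optee_probe()
is illustrative):

    static bool optee_probe(void)
    {
        /* ... */
        /* Refuse to build if Xen's page size differs from OP-TEE's. */
        BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE);
        /* ... */
    }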


--
Best regards, Volodymyr Babchuk