
Re: [Xen-devel] [PATCH v3 07/11] optee: add support for RPC SHM buffers





On 17/01/2019 21:01, Volodymyr Babchuk wrote:

Hi,

Hi Volodymyr,

Julien Grall writes:

Hi,

On 1/17/19 7:48 PM, Volodymyr Babchuk wrote:

Julien Grall writes:

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>

[...]

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index dc90e2ed8e..771148e940 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -30,6 +30,12 @@
     * OP-TEE spawns a thread for every standard call.
     */
    #define MAX_STD_CALLS   16
+/*
+ * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
+ * for one SHM buffer per thread, so this also corresponds to OP-TEE
+ * option CFG_NUM_THREADS
+ */

Same as patch #6 regarding CFG_NUM_THREADS.
Right now OP-TEE will not allocate more than one buffer per OP-TEE
thread, and I can see no reason why that would change. So, basically, I
could remove MAX_RPC_SHMS entirely and use MAX_STD_CALLS instead. But
then it would be less obvious why I compare the number of SHM buffers
with the number of std calls. Thus, I think it is good to have a
separate define and comment.
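
For illustration, one way to keep the separate name while making the
comparison with standard calls self-explanatory (just a sketch of what
I have in mind, not necessarily what the patch does):

#define MAX_RPC_SHMS    MAX_STD_CALLS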

I am not against having the two defines; what I was pointing out is
that the documentation on top is incorrect, same as in patch #6.
I'm sorry, I don't quite understand. In patch #6 your concern was that
CFG_NUM_THREADS depends on the platform, right?

My first concern was that the documentation does not reflect reality, because CFG_NUM_THREADS is not always equal to 16.

Ideally we should be able to know the number of threads supported. But that could be a follow-up patch (or potentially ignored) if nothing bad can happen when Xen handles more threads than OP-TEE does.

In any case, the documentation in Xen should reflect reality.
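
For example, the comment could state the relation without claiming a
fixed thread count (the wording is only a suggestion):

/*
 * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
 * for one SHM buffer per thread, so this should match the
 * CFG_NUM_THREADS option the platform's OP-TEE was built with, which
 * is not necessarily 16.
 */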


If you happen to make MAX_STD_CALLS dynamic, then this should also be
dynamic.
Of course.

[...]
+static void handle_rpc_func_alloc(struct optee_domain *ctx,
+                                  struct cpu_user_regs *regs)
+{
+    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+
+    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
+        gprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");

Shouldn't you bail out in that case? Also, I would turn it into a gdprintk.
OP-TEE does its own checks and that check will fail there as well. Then
OP-TEE will issue a request to free this SHM.

I think it is better if we stay on the safe side. I.e. if we know there
would be an error (like here), then you need to return an error from
Xen rather than calling OP-TEE. Otherwise, you may end up with nasty
problems.
Actually, I don't see how I can do this. This is an RPC response. I
can't return an error to an RPC response. All I can do is mangle the
RPC response in such a way that OP-TEE will surely treat it as an error
and act accordingly.

I am not sure I understand what you mean here. Surely if the address is not aligned, then OP-TEE will return an error too, right? So can't you emulate OP-TEE behavior in that case?
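
For instance, something along these lines (only a rough sketch of what
I have in mind; whether a zero address is the right value to make
OP-TEE drop the allocation is an assumption about the RPC protocol, so
please double-check):

static void handle_rpc_func_alloc(struct optee_domain *ctx,
                                  struct cpu_user_regs *regs)
{
    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);

    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
    {
        gdprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
        /*
         * OP-TEE is waiting for an answer to its RPC, so we cannot
         * simply fail the call from Xen. Hand back a value OP-TEE
         * itself will reject (a zero address here, as an assumption),
         * so it frees the allocation on its side, and skip
         * registering the buffer in the mediator.
         */
        set_user_reg(regs, 1, 0);
        set_user_reg(regs, 2, 0);
        return;
    }

    /* ... rest of the handling as in the patch ... */
}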


Also, I think you want a comment (maybe in the commit message)
explaining that OPTEE_MSG_NONCONTIG_PAGE_SIZE will always be equal to
PAGE_SIZE.
It is always equal to 4096 bytes. But, AFAIK, Xen can work with other
PAGE_SIZEs, can't it? Actually, the Linux driver supports other page
sizes; it just splits those pages into 4k chunks. The same can be done
in Xen, but I don't want to introduce this logic right now. The patches
are complex enough. Right now we have
BUILD_BUG_ON(PAGE_SIZE != OPTEE_MSG_NONCONTIG_PAGE_SIZE) and this is
enough, I hope.

Xen only supports 4KB pages, but I wouldn't bet on that in the future on Arm, as we will require 64KB pages for some features (i.e. 52-bit support).

I wasn't asking you to handle a different PAGE_SIZE, just to clarify in a comment on top that you expect the two defines to be equal. Another BUILD_BUG_ON is not necessary, assuming you add the one in patch #6.
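
Something as short as this on top of the defines would do (the wording
is only a suggestion):

/*
 * Only 4KB pages are supported for now: the code below relies on
 * Xen's PAGE_SIZE being equal to OPTEE_MSG_NONCONTIG_PAGE_SIZE
 * (enforced by the BUILD_BUG_ON added in patch #6).
 */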

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

