
Re: [Xen-devel] [PATCH v2 07/13] optee: add std call handling





On 11.09.18 14:19, Julien Grall wrote:


On 10/09/18 18:37, Volodymyr Babchuk wrote:
Hi Julien,

Hi,

On 05.09.18 18:17, Julien Grall wrote:
Hi,

On 09/03/2018 05:54 PM, Volodymyr Babchuk wrote:
Main way to communicate with OP-TEE is to issue standard SMCCC

NIT: The main way

call. "Standard" is a SMCCC term and it means that call can be
interrupted and OP-TEE can return control to NW before completing
the call.

In contranst with fast calls, where arguments and return values

NIT: s/contranst/contrast/

are passed in registers, standard calls use shared memory. Register
pair r1,r2 holds 64-bit PA of command buffer, where all arguments

Do you mean w1, w2?
Good question. What should the registers be called so that it is valid
for both ARMv7 and ARMv8?

Which naming does OP-TEE use? Is it a*?

Exactly. I've seen the same convention in Xen, in the SMCCC code. So, a*, then?
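
(For illustration only, a minimal sketch, not from the patch, of reading
the 64-bit command buffer PA from the a1/a2 pair with Xen's
get_user_reg() accessor, so the same code reads naturally for both ARMv7
and ARMv8 guests; the helper name is made up:

static paddr_t optee_cmd_buffer_pa(struct cpu_user_regs *regs)
{
    /*
     * OP-TEE's SMC ABI passes the upper 32 bits of the buffer PA
     * in a1 and the lower 32 bits in a2.
     */
    uint32_t hi = get_user_reg(regs, 1);
    uint32_t lo = get_user_reg(regs, 2);

    return ((paddr_t)hi << 32) | lo;
}
)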

[...]
This function might be a bit tricky to implement using access_guest_memory_by_ipa. However, you can just map the region temporarily. This would avoid using map_domain_page_global() within the code and remove one potential error in the cleanup path.
I think I can do all the needed manipulations in the shadowed buffer
and then copy it as a whole to the guest's buffer.

That would be quite dangerous, as you may copy back data into the guest that you don't want. So I would prefer the whitelist solution, as you do below.

Okay, then I can map the needed guest page temporarily.
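
(A rough sketch of what that could look like; the helper name and the
exact signatures are assumptions, they vary between Xen versions, and
this assumes the copied range fits within one page:

static bool copy_to_guest_page(struct domain *d, paddr_t gaddr,
                               const void *src, size_t len)
{
    struct page_info *page;
    void *va;

    /* Look up and take a reference on the guest page backing gaddr. */
    page = get_page_from_gfn(d, paddr_to_pfn(gaddr), NULL, P2M_ALLOC);
    if ( !page )
        return false;

    /* Short-lived mapping: nothing global to clean up later. */
    va = map_domain_page(page_to_mfn(page));
    memcpy((char *)va + (gaddr & ~PAGE_MASK), src, len);
    unmap_domain_page(va);

    put_page(page);

    return true;
}
)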


+
+    call->guest_arg->ret = call->xen_arg->ret;
+    call->guest_arg->ret_origin = call->xen_arg->ret_origin;
+    call->guest_arg->session = call->xen_arg->session;
+    for ( i = 0; i < call->xen_arg->num_params; i++ ) {

for ( ... )
{

+        attr = call->xen_arg->params[i].attr;
+
+        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK ) {

switch ( ... )
{

+        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
+            call->guest_arg->params[i].u.tmem.size =
+                call->xen_arg->params[i].u.tmem.size;
+            continue;
+        case OPTEE_MSG_ATTR_TYPE_VALUE_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_VALUE_INOUT:
+            call->guest_arg->params[i].u.value.a =
+                call->xen_arg->params[i].u.value.a;
+            call->guest_arg->params[i].u.value.b =
+                call->xen_arg->params[i].u.value.b;
+            continue;
+        case OPTEE_MSG_ATTR_TYPE_RMEM_OUTPUT:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_INOUT:
+            call->guest_arg->params[i].u.rmem.size =
+                call->xen_arg->params[i].u.rmem.size;
+            continue;
+        case OPTEE_MSG_ATTR_TYPE_NONE:
+        case OPTEE_MSG_ATTR_TYPE_RMEM_INPUT:
+        case OPTEE_MSG_ATTR_TYPE_TMEM_INPUT:
+            continue;
+        }
+    }
+
+    return true;
+}
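
(For reference, the Xen coding style Julien points at above puts the
opening brace on its own line; the loop restyled, content unchanged:

    for ( i = 0; i < call->xen_arg->num_params; i++ )
    {
        attr = call->xen_arg->params[i].attr;

        switch ( attr & OPTEE_MSG_ATTR_TYPE_MASK )
        {
        case OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT:
        case OPTEE_MSG_ATTR_TYPE_TMEM_INOUT:
            call->guest_arg->params[i].u.tmem.size =
                call->xen_arg->params[i].u.tmem.size;
            continue;
        /* ... remaining cases as in the hunk above ... */
        }
    }
)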
+
+static bool execute_std_call(struct domain_ctx *ctx,
+                             struct cpu_user_regs *regs,
+                             struct std_call_ctx *call)
+{
+    register_t optee_ret;
+
+    forward_call(regs);

I find it a bit odd that you introduce a return value for forward_call in the previous patch but barely use it.
You see, the division of the original big patch into smaller ones is a bit
arbitrary, so you can spot artifacts like this. I'll try to fix
them.

I would recommend thinking about the split from the beginning of your work, not afterwards. If you can't split the code easily, then it likely means the review is going to be difficult to do.

Sure, usually I do patches this way. But in this case I developed an entirely new feature, and it took a number of iterations to get it working the way I wanted. Hence one big patch, which was then split into a series.

But here the main problem is not really the split. The problem is that you have a function returning a value. That value should *always* be used or checked. If you don't care about the value, then you most likely want a comment on top (and possibly an ASSERT) stating that this will never fail.

Okay, I got the idea. Thanks.
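
(To illustrate the point, a sketch of the two options; forward_call()'s
actual return convention is defined in the previous patch of the series,
so the boolean treatment here is an assumption:

    /* Option 1: propagate failures to the caller. */
    if ( !forward_call(regs) )
        return false;

    /*
     * Option 2: if the call genuinely cannot fail at this point,
     * document it and assert:
     *
     * ret = forward_call(regs);
     * ASSERT(ret);
     */
)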



--
Volodymyr Babchuk

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

