Re: [Xen-devel] [PATCH v2 10/13] optee: add support for RPC commands
Hi,

On 11.09.18 16:56, Julien Grall wrote:
On 10/09/18 19:14, Volodymyr Babchuk wrote:

I think you need to modify your habit because I am pretty sure Linux folks would not allow:

On 10.09.18 18:34, Julien Grall wrote:
On 03/09/18 17:54, Volodymyr Babchuk wrote:

static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,
                                        uint64_t cookie,
                                        int pages_cnt)

@@ -704,6 +732,28 @@ static bool copy_std_request_back(struct domain_ctx *ctx,
     return true;
 }

+static void handle_rpc_return(struct domain_ctx *ctx,
+                              struct cpu_user_regs *regs,
+                              struct std_call_ctx *call)
+{
+    call->optee_thread_id = get_user_reg(regs, 3);
+    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
+
+    if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
+    {
+        /* Copy RPC request from shadowed buffer to guest */
+        uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
+        struct shm_rpc *shm_rpc = find_shm_rpc(ctx, cookie);

Newline between declaration and code.

Sorry, another habit from kernel coding style :(

Ah, yes, I read your previous message incorrectly. No, this actually is part of the protocol: OP-TEE can ask to allocate more shared buffers, one per RPC return.

+        if ( !shm_rpc )
+        {
+            gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
+            return;
+        }
+        memcpy(shm_rpc->guest_arg, shm_rpc->xen_arg,
+               OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params));
+    }
+}
+
 static bool execute_std_call(struct domain_ctx *ctx,
                              struct cpu_user_regs *regs,
                              struct std_call_ctx *call)
@@ -715,8 +765,7 @@ static bool execute_std_call(struct domain_ctx *ctx,
     optee_ret = get_user_reg(regs, 0);
     if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
     {
-        call->optee_thread_id = get_user_reg(regs, 3);
-        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
+        handle_rpc_return(ctx, regs, call);

It would make sense to introduce handle_rpc_return where you actually add those 2 lines.

         return true;
     }

@@ -783,6 +832,74 @@ out:
     return ret;
 }

+static void handle_rpc_cmd_alloc(struct domain_ctx *ctx,
+                                 struct cpu_user_regs *regs,
+                                 struct std_call_ctx *call,
+                                 struct shm_rpc *shm_rpc)
+{
+    if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
+                                              OPTEE_MSG_ATTR_NONCONTIG) )
+    {
+        gprintk(XENLOG_WARNING, "Invalid attrs for shared mem buffer\n");
+        return;
+    }
+
+    /* Last entry in non_contig array is used to hold RPC-allocated buffer */
+    if ( call->non_contig[MAX_NONCONTIG_ENTRIES - 1] )
+    {
+        free_xenheap_pages(call->non_contig[MAX_NONCONTIG_ENTRIES - 1],
+                           call->non_contig_order[MAX_NONCONTIG_ENTRIES - 1]);
+        call->non_contig[MAX_NONCONTIG_ENTRIES - 1] = NULL;
+    }

This is quite odd. Why don't you just deny allocating a new entry in the non_contig array? This would avoid silently dropping pages that may have been linked together and may still be in use.

Please give a link to the spec and the paragraph.

optee_msg.h, above "#define OPTEE_MSG_RPC_CMD_SHM_ALLOC 6" (line 405 in my case). It also defines fragmented buffers, which currently aren't supported in OP-TEE.

call->non_contig[x] is needed to hold the list of pages until OP-TEE consumes them. Then it can be freed and reused to allocate the next buffer. Consider this:

What is x?

Index in the non_contig array. The same principle works not only for buffers allocated by RPC, but for all other buffers as well. See below.

1. OP-TEE issues RPC "allocate buffer".
2. NW returns a list of pages.
3. Mediator translates it and stores the address in non_contig[x].
4. OP-TEE begins to consume this list.
5. An IRQ arrives and OP-TEE is forced to interrupt the work.
6. Mediator receives control back, but it should not free non_contig[x], because it is not sure that OP-TEE has finished reading from it.
7. Xen/guest handles the IRQ and returns control back to OP-TEE.
8. OP-TEE finishes processing this buffer and asks for another one.
9. NW returns a list of pages for the next buffer.
10. At this point the mediator is sure that OP-TEE has finished processing the old non_contig[x], so it can free it and allocate another.
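To put the scheme above into code: a minimal sketch of this deferred-free policy, reusing the field and constant names from the quoted hunks. The trimmed struct and the replace_rpc_buffer() helper are illustrative only, they are not part of the patch:

#include <xen/mm.h>            /* free_xenheap_pages() */

#define MAX_NONCONTIG_ENTRIES  5                          /* 4 command params + 1 RPC buffer */
#define RPC_BUF_SLOT           (MAX_NONCONTIG_ENTRIES - 1) /* made-up name for the last slot */

/* Trimmed view of the patch's std_call_ctx, only what this sketch needs. */
struct std_call_ctx {
    void *non_contig[MAX_NONCONTIG_ENTRIES];       /* page lists handed to OP-TEE */
    int non_contig_order[MAX_NONCONTIG_ENTRIES];
};

/* Called when OP-TEE asks for yet another shared buffer (steps 8-10 above). */
static void replace_rpc_buffer(struct std_call_ctx *call,
                               void *new_list, int new_order)
{
    /*
     * The previous RPC-allocated list may only be freed now: OP-TEE asking
     * for a new buffer means it has finished reading the old one.
     */
    if ( call->non_contig[RPC_BUF_SLOT] )
        free_xenheap_pages(call->non_contig[RPC_BUF_SLOT],
                           call->non_contig_order[RPC_BUF_SLOT]);

    call->non_contig[RPC_BUF_SLOT] = new_list;
    call->non_contig_order[RPC_BUF_SLOT] = new_order;
}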
Thank you for the description of the protocol. However, it still does not explain why you decided to free MAX_NONCONTIG_ENTRIES - 1. Why not 0 or 1 or n?

Okay. You can pass up to 4 arguments to a TA in the command buffer. Any of those arguments can be a shared memory reference, so we need at least 4 items in the non_contig array. Sadly, this constant (TEE_NUM_PARAMS) is defined not in optee_msg.h or optee_smc.h, but in tee_api_defines.h, which is the user-space (or TA) part of OP-TEE. But now I'm thinking that maybe it is not such a bad idea to add this file to Xen... What is your opinion?

Anyway, we need at least 4 items for arguments. But I defined MAX_NONCONTIG_ENTRIES as 5, because the last item in the array is used for the RPC-allocated buffer. I also added a comment; I'll copy it here:

/* Last entry in non_contig array is used to hold RPC-allocated buffer */

So, the first 4 items are used for arguments and the last one is used for RPC requests. The right way to define MAX_NONCONTIG_ENTRIES is (TEE_NUM_PARAMS + 1), but then I would have to add another header file, which defines the GlobalPlatform TEE Core API.

Overall, it feels to me like you want to write more documentation about how the mediator is supposed to work.

Looks like I need to start with how the OP-TEE protocol is supposed to work... I tried to cover this in the commit messages, but it looks like that is not sufficient.

         gprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object\n");
-        ptr = 0;
-    }
-    else
-        ptr = mfn_to_maddr(shm_rpc->guest_mfn);
+    ptr = ~0;

Can you explain why you changed from 0 to ~0?

I had to introduce this in the original patch, actually.

What do you mean?

Please ignore the previous sentence. There should be 0. I'll remove this change.

-- 
Volodymyr Babchuk
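As a follow-up to the last exchange, a sketch of how the SHM_ALLOC reply pointer could be set once the ~0 change is dropped, i.e. 0 reported back on allocation failure. The shm_alloc_reply_ptr() helper and the trimmed struct are made up for illustration; gprintk() and mfn_to_maddr() are used as in the hunk above:

#include <xen/lib.h>   /* gprintk(), XENLOG_WARNING */
#include <xen/mm.h>    /* mfn_to_maddr() */

/* Trimmed view of the patch's struct shm_rpc, only what this sketch needs. */
struct shm_rpc {
    mfn_t guest_mfn;   /* page shared back to the guest */
};

/*
 * Illustrative helper (not in the patch): compute the buffer pointer the
 * mediator puts into the SHM_ALLOC RPC reply for OP-TEE.
 */
static uint64_t shm_alloc_reply_ptr(const struct shm_rpc *shm_rpc)
{
    if ( !shm_rpc )
    {
        gprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object\n");
        return 0;    /* 0 on failure, as agreed above, not ~0 */
    }

    return mfn_to_maddr(shm_rpc->guest_mfn);
}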