Re: [Xen-devel] [PATCH v2 10/13] optee: add support for RPC commands
On 10/09/18 19:14, Volodymyr Babchuk wrote:
> On 10.09.18 18:34, Julien Grall wrote:
>> On 03/09/18 17:54, Volodymyr Babchuk wrote:
>>> static struct shm_buf *allocate_shm_buf(struct domain_ctx *ctx,
>>>                                         uint64_t cookie,
>>>                                         int pages_cnt)
>>> @@ -704,6 +732,28 @@ static bool copy_std_request_back(struct domain_ctx *ctx,
>>>      return true;
>>>  }
>>>
>>> +static void handle_rpc_return(struct domain_ctx *ctx,
>>> +                              struct cpu_user_regs *regs,
>>> +                              struct std_call_ctx *call)
>>> +{
>>> +    call->optee_thread_id = get_user_reg(regs, 3);
>>> +    call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(get_user_reg(regs, 0));
>>> +
>>> +    if ( call->rpc_op == OPTEE_SMC_RPC_FUNC_CMD )
>>> +    {
>>> +        /* Copy RPC request from shadowed buffer to guest */
>>> +        uint64_t cookie = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>>> +        struct shm_rpc *shm_rpc = find_shm_rpc(ctx, cookie);
>>
>> Newline between declaration and code.
> Sorry, another habit from kernel coding style :(

I think you need to modify your habit, because I am pretty sure the Linux
folks would not allow:

struct shm_rpc *rpc = find_shm(...);
if ( rpc )
...

The correct way is:

struct shm_rpc *rpc = find_shm(...);

if ( rpc )

Notice the newline between the declaration and the if.

>>> +        if ( !shm_rpc )
>>> +        {
>>> +            gprintk(XENLOG_ERR, "Can't find SHM-RPC with cookie %lx\n", cookie);
>>> +            return;
>>> +        }
>>> +        memcpy(shm_rpc->guest_arg, shm_rpc->xen_arg,
>>> +               OPTEE_MSG_GET_ARG_SIZE(shm_rpc->xen_arg->num_params));
>>> +    }
>>> +}
>>> +
>>>  static bool execute_std_call(struct domain_ctx *ctx,
>>>                               struct cpu_user_regs *regs,
>>>                               struct std_call_ctx *call)
>>> @@ -715,8 +765,7 @@ static bool execute_std_call(struct domain_ctx *ctx,
>>>      optee_ret = get_user_reg(regs, 0);
>>>      if ( OPTEE_SMC_RETURN_IS_RPC(optee_ret) )
>>>      {
>>> -        call->optee_thread_id = get_user_reg(regs, 3);
>>> -        call->rpc_op = OPTEE_SMC_RETURN_GET_RPC_FUNC(optee_ret);
>>> +        handle_rpc_return(ctx, regs, call);
>>
>> It would make sense to introduce handle_rpc_return where you actually add
>> those two lines.

>>>          return true;
>>>      }
>>>
>>> @@ -783,6 +832,74 @@ out:
>>>      return ret;
>>>  }
>>> +
>>> +static void handle_rpc_cmd_alloc(struct domain_ctx *ctx,
>>> +                                 struct cpu_user_regs *regs,
>>> +                                 struct std_call_ctx *call,
>>> +                                 struct shm_rpc *shm_rpc)
>>> +{
>>> +    if ( shm_rpc->xen_arg->params[0].attr != (OPTEE_MSG_ATTR_TYPE_TMEM_OUTPUT |
>>> +                                              OPTEE_MSG_ATTR_NONCONTIG) )
>>> +    {
>>> +        gprintk(XENLOG_WARNING, "Invalid attrs for shared mem buffer\n");
>>> +        return;
>>> +    }
>>> +
>>> +    /* Last entry in non_contig array is used to hold RPC-allocated buffer */
>>> +    if ( call->non_contig[MAX_NONCONTIG_ENTRIES - 1] )
>>> +    {
>>> +        free_xenheap_pages(call->non_contig[MAX_NONCONTIG_ENTRIES - 1],
>>> +                           call->non_contig_order[MAX_NONCONTIG_ENTRIES - 1]);
>>> +        call->non_contig[MAX_NONCONTIG_ENTRIES - 1] = NULL;
>>> +    }
>>
>> This is quite odd. Why don't you just deny allocating information in the
>> non_contig array? This would avoid silently dropping any page that may have
>> been linked together and potentially still be in use.
>>
>> Please give a link to the spec and the paragraph.
> No, this actually is part of the protocol. OP-TEE can ask to allocate more
> shared buffers, one per RPC return.
> call->non_contig[x] is needed to hold the list of pages until OP-TEE
> consumes them. Then it can be freed and reused to allocate the next buffer.

What is x?

> Consider this:
>
> 1. OP-TEE issues RPC "allocate buffer"
> 2. NW returns list of pages
> 3. Mediator translates and stores address in non_contig[x]
> 4. OP-TEE begins to consume this list
> 5. IRQ arrives and OP-TEE is forced to break the work
> 6. Mediator receives control back, but it should not free non_contig[x],
>    because it is not sure whether OP-TEE finished reading from it
> 7. Xen/guest handles the IRQ and returns control back to OP-TEE
> 8. OP-TEE finishes processing this buffer and asks for another one
> 9. NW returns list of pages for the next buffer
> 10. At this point the mediator is sure that OP-TEE has finished processing
>     the old non_contig[x], so it can free it and allocate another.

Thank you for the description of the protocol. However, it still does not
explain why you decided to free MAX_NONCONTIG_ENTRIES - 1. Why not 0 or 1 or n?

Overall, it feels to me like you want to write more documentation about how
the mediator is supposed to work.
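To make the lifetime rule in the list above concrete, here is a minimal,
self-contained sketch in plain C of the "free the previous RPC buffer only
when the next allocation request arrives" bookkeeping. The names
(rpc_buf_slot, handle_rpc_alloc_request, handle_rpc_interrupt) are invented
for illustration; this is not the mediator code itself:

/*
 * Illustrative model only: one slot keeps the translated page list of the
 * RPC-allocated buffer alive until OP-TEE asks for the *next* buffer,
 * i.e. steps 3-10 of the list above.
 */
#include <stdlib.h>

struct rpc_buf_slot {
    void *page_list;   /* translated page list handed to OP-TEE */
    size_t size;       /* size of that list */
};

/* Steps 9-10: a new "allocate buffer" request proves the old list is done. */
static void handle_rpc_alloc_request(struct rpc_buf_slot *slot, size_t new_size)
{
    if ( slot->page_list )
    {
        /* Previous RPC buffer fully consumed by OP-TEE: safe to drop it. */
        free(slot->page_list);
        slot->page_list = NULL;
    }

    /* Step 3: translate and store the new list in the slot. */
    slot->page_list = malloc(new_size);
    slot->size = new_size;
}

/* Steps 5-7: an interrupted call returns to the guest; keep the list. */
static void handle_rpc_interrupt(struct rpc_buf_slot *slot)
{
    /* Deliberately do NOT free slot->page_list: OP-TEE may still read it. */
    (void)slot;
}

int main(void)
{
    struct rpc_buf_slot slot = { NULL, 0 };

    handle_rpc_alloc_request(&slot, 4096);  /* first buffer */
    handle_rpc_interrupt(&slot);            /* IRQ: nothing freed */
    handle_rpc_alloc_request(&slot, 4096);  /* next buffer: old one freed */

    free(slot.page_list);
    return 0;
}

Whether that single slot should live in the last non_contig entry or in a
dedicated field of the call context is exactly what the question about
MAX_NONCONTIG_ENTRIES - 1 above is getting at.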
[...]

>> Please try to avoid changing the coding style in a different patch. But
>> this one is wrong.
> Yep :( this is the artifact from splitting the big patch.

>>>          gprintk(XENLOG_WARNING, "Failed to allocate shm_rpc object\n");
>>> -        ptr = 0;
>>> -    }
>>> -    else
>>> -        ptr = mfn_to_maddr(shm_rpc->guest_mfn);
>>> +        ptr = ~0;
>>
>> Can you explain why you change from 0 to ~0?
> I had to introduce this in the original patch, actually.

What do you mean?

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel