Re: [Xen-devel] [PATCH v2 05/13] optee: add fast calls handling
Hi Volodymyr,

On 09/03/2018 05:54 PM, Volodymyr Babchuk wrote:
> Some fast SMCCC calls to OP-TEE should be handled in a special way.
> Capabilities exchange should be filtered out, so only caps known to
> mediator are used. Also mediator disables static SHM memory
> capability, because it can't share OP-TEE memory with a domain. Only
> domain can share memory with OP-TEE, so it ensures that OP-TEE
> supports dynamic SHM.
>
> Basically, static SHM is a reserved memory region which is always
> mapped into OP-TEE address space. It belongs to OP-TEE. Normally, NW
> is allowed to access it, so it can communicate with OP-TEE.
>
> On the other hand, dynamic SHM is NW's own memory, which it can
> share with OP-TEE. OP-TEE maps this memory dynamically, when it
> wants to access it.
>
> Because mediator can't share one static SHM region with all guests,
> it just disables it for all.

Would it make sense to still allow the hardware domain to access
static SHM?

> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
> ---
>  xen/arch/arm/tee/optee.c | 56 +++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index 7bb84d9..48bff5d 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -56,7 +56,7 @@ static int optee_enable(struct domain *d)
>      return 0;
>  }
>
> -static void forward_call(struct cpu_user_regs *regs)
> +static bool forward_call(struct cpu_user_regs *regs)
>  {
>      struct arm_smccc_res resp;
>
> @@ -79,6 +79,20 @@ static void forward_call(struct cpu_user_regs *regs)
>      set_user_reg(regs, 5, 0);
>      set_user_reg(regs, 6, 0);
>      set_user_reg(regs, 7, 0);
> +
> +    return resp.a0 == OPTEE_SMC_RETURN_OK;
> +}
> +
> +static void set_return(struct cpu_user_regs *regs, uint32_t ret)
> +{
> +    set_user_reg(regs, 0, ret);
> +    set_user_reg(regs, 1, 0);
> +    set_user_reg(regs, 2, 0);
> +    set_user_reg(regs, 3, 0);
> +    set_user_reg(regs, 4, 0);
> +    set_user_reg(regs, 5, 0);
> +    set_user_reg(regs, 6, 0);
> +    set_user_reg(regs, 7, 0);
>  }
>
>  static void optee_domain_destroy(struct domain *d)
> @@ -92,6 +106,39 @@ static void optee_domain_destroy(struct domain *d)
>                     &resp);
>  }
>
> +static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
> +{
> +    uint32_t caps;
> +
> +    /* Filter out unknown guest caps */
> +    caps = get_user_reg(regs, 1);
> +    caps &= OPTEE_SMC_NSEC_CAP_UNIPROCESSOR;

I think it would make sense to introduce a define for the mask.
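Something like the below is what I have in mind. Just a sketch, and
the names (OPTEE_KNOWN_NSEC_CAPS, OPTEE_KNOWN_SEC_CAPS) are my own
suggestion, not from the patch:

#define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
#define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
                              OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
                              OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)

    /* Filter out unknown guest caps */
    caps = get_user_reg(regs, 1);
    caps &= OPTEE_KNOWN_NSEC_CAPS;

The second define would then also cover the OP-TEE caps mask further
down.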
> +    set_user_reg(regs, 1, caps);
> +
> +    /* Forward call and return error (if any) back to the guest */
> +    if ( !forward_call(regs) )
> +        return true;
> +
> +    caps = get_user_reg(regs, 1);
> +
> +    /* Filter out unknown OP-TEE caps */
> +    caps &= OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM |
> +            OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM |
> +            OPTEE_SMC_SEC_CAP_DYNAMIC_SHM;

Same here.

> +
> +    /* Drop static SHM_RPC cap */
> +    caps &= ~OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM;
> +
> +    /* Don't allow guests to work without dynamic SHM */
> +    if ( !(caps & OPTEE_SMC_SEC_CAP_DYNAMIC_SHM) ) {

Coding style:

if ( ... )
{

> +        set_return(regs, OPTEE_SMC_RETURN_ENOTAVAIL);
> +        return true;
> +    }
> +
> +    set_user_reg(regs, 1, caps);

Newline here.

> +    return true;
> +}
> +
>  static bool optee_handle_call(struct cpu_user_regs *regs)
>  {
>      switch ( get_user_reg(regs, 0) )
> @@ -103,10 +150,17 @@ static bool optee_handle_call(struct cpu_user_regs *regs)
>      case OPTEE_SMC_FUNCID_GET_OS_REVISION:
>      case OPTEE_SMC_ENABLE_SHM_CACHE:
>      case OPTEE_SMC_DISABLE_SHM_CACHE:
> +        forward_call(regs);
> +        return true;
>      case OPTEE_SMC_GET_SHM_CONFIG:
> +        /* No static SHM available for guests */
> +        set_return(regs, OPTEE_SMC_RETURN_ENOTAVAIL);
> +        return true;
>      case OPTEE_SMC_EXCHANGE_CAPABILITIES:
> +        return handle_exchange_capabilities(regs);
>      case OPTEE_SMC_CALL_WITH_ARG:
>      case OPTEE_SMC_CALL_RETURN_FROM_RPC:
> +        /* TODO: Add proper handling for this calls */

I think I would prefer if the calls were not introduced in the first
place. You can then add them when they are actually implemented.
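Just to illustrate what I mean -- a sketch only, assuming the
existing default path is the right fallback for function IDs the
mediator does not handle yet:

    case OPTEE_SMC_EXCHANGE_CAPABILITIES:
        return handle_exchange_capabilities(regs);
    /*
     * No case labels for OPTEE_SMC_CALL_WITH_ARG and
     * OPTEE_SMC_CALL_RETURN_FROM_RPC yet: without them the calls
     * take the default path below until properly implemented.
     */
    default: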
>          forward_call(regs);
>          return true;
>      default:

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel