Re: [Xen-devel] [PATCH v2 09/13] optee: add support for arbitrary shared memory
Hi,

On 12.09.18 14:02, Julien Grall wrote:
On 09/11/2018 08:33 PM, Volodymyr Babchuk wrote:

Hi Julien,

Hi,

On 11.09.18 16:37, Julien Grall wrote:

Hi Volodymyr,

On 10/09/18 19:04, Volodymyr Babchuk wrote:
On 10.09.18 17:02, Julien Grall wrote:
On 03/09/18 17:54, Volodymyr Babchuk wrote:
[...]

4 buffers, but yes, it can be up to 8MB.

Okay, I'll add a per-call counter to limit memory usage for a whole call.

So, in other words, I can translate only a 2MB buffer (if 4096-byte pages are used), is it right?

+    if ( !pages_data_xen_start )
+        return false;
+
+    shm_buf = allocate_shm_buf(ctx, param->u.tmem.shm_ref, num_pages);

In allocate_shm_buf you are now globally limiting the number of pages (16384) to pin. However, this does not limit per call.

With the current limit, you could call lookup_and_pin_guest_ram_addr(...) up to 16384 times. On Arm, for p2m related operations, we limit to 512 iterations in one go before checking for preemption. So I think 16384 times is far too much.

2MB for the whole command. So if you have 5 buffers in the command, then the sum of the buffers should not be bigger than 2MB.

That would need to be reduced to 2MB in total per call. You probably want to look at max_order(...).

Yes, this is what I was saying. 512 pages per call.

However, 2MB might be too big considering that you also need to account for the SMC call. Can a buffer be passed for a fast call?

No, all such calls are yielding calls, so you can ignore the time used for the SMC call itself.

How come you can ignore it? It has a cost to trap to EL3.

Strictly speaking, yes. All steps have a cost: trap to EL3, dispatch in EL3, switch to S-EL1, new thread preparation in OP-TEE, context switch in OP-TEE. I wanted to say that, in my opinion, this is negligible in comparison with the actual call processing. But maybe I'm wrong there.

-- 
Volodymyr Babchuk
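
To make the limit under discussion concrete, below is a minimal sketch of a per-call pinning budget along the lines suggested above (512 pages, i.e. 2MB with 4KB pages). This is illustration only, not the actual patch: every identifier except the quoted lookup_and_pin_guest_ram_addr() is invented for this sketch, and that helper is represented by a stub because its real signature is defined in the patch under review.

/* Illustrative sketch only -- not the actual patch code. */
#define MAX_SHM_PAGES_PER_CALL 512      /* 512 * 4KB = 2MB per call */

struct optee_call_ctx {
    unsigned int pinned_pages;          /* pages pinned so far in this call */
};

/*
 * Stub standing in for lookup_and_pin_guest_ram_addr(); the real helper
 * (and its signature) lives in the patch under review.
 */
static int pin_one_guest_page(unsigned long gfn)
{
    (void)gfn;
    return 0;                           /* pretend pinning succeeded */
}

/*
 * Pin num_pages pages for one tmem buffer, refusing the whole command if
 * it would exceed the per-call budget, so a single call cannot keep the
 * hypervisor pinning pages for an unbounded time.
 */
static int pin_buffer_pages(struct optee_call_ctx *call,
                            unsigned long start_gfn, unsigned int num_pages)
{
    unsigned int i;

    if ( call->pinned_pages + num_pages > MAX_SHM_PAGES_PER_CALL )
        return -1;

    for ( i = 0; i < num_pages; i++ )
    {
        if ( pin_one_guest_page(start_gfn + i) )
            return -1;
        call->pinned_pages++;
    }

    return 0;
}

In this sketch the counter lives in a per-call context, so it naturally resets when the next call starts, which is the behaviour a per-call (rather than global) limit is meant to give.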