Re: [Xen-devel] [PATCH v4 06/10] xen/arm: optee: add support for RPC SHM buffers
Hi Volodymyr,

On 07/03/2019 21:04, Volodymyr Babchuk wrote:
> From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>
>
> OP-TEE usually uses the same idea with command buffers (see previous
> commit) to issue RPC requests. Problem is that initially it has no
> buffer, where it can write request. So the first RPC request it makes
> is special: it requests NW to allocate shared buffer for other RPC
> requests. Usually this buffer is allocated only once for every OP-TEE
> thread and it remains allocated all the time until guest shuts down.
>
> Guest can ask OP-TEE to disable RPC buffers caching, in this case
> OP-TEE will ask guest to allocate/free buffer for the each RPC.
>
> Mediator needs to pin this buffer to make sure that domain can't
> transfer it to someone else.

At the moment, Xen on Arm doesn't support transferring a page between domains (see steal_page). What we want to prevent here is the domain freeing the page (via XENMEM_decrease_reservation). If the reference drops to 0, the page will be freed and could potentially be allocated for Xen usage or for another domain. Taking the reference here will prevent the page from being freed until the reference is dropped.

So I would reword this sentence. Something like: "Mediator needs to pin the buffer to make sure the page will not be freed while it is shared with OP-TEE".

> Life cycle of this buffer is controlled by OP-TEE. It asks guest to
> create buffer and it asks it to free it. So it there is no much

NIT: s/no/not/

> sense to limit number of those buffers, as we limited number of

NIT: s/limited/already limit the/

> concurrent standard calls, because this can impair functionality of
> OP-TEE.

Could you add a similar comment on top of call_count?

The code looks good to me.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel