
Re: [Xen-devel] [PATCH v3 08/11] optee: add support for arbitrary shared memory



On 19/02/2019 15:59, Volodymyr Babchuk wrote:
Hello Julien,

Hi Volodymyr,

Julien Grall writes:

Hi Volodymyr,

On 18/12/2018 21:11, Volodymyr Babchuk wrote:
From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>

Shared memory is widely used by the NW to communicate with
TAs in OP-TEE. The NW can share part of its own memory with
a TA or the OP-TEE core, either by registering it with OP-TEE or by
providing a temporary reference. In both cases, information about such
memory buffers is sent to OP-TEE as a list of pages. This mechanism
is described in optee_msg.h.

The mediator should step in when the NW tries to share memory with
OP-TEE for two reasons:

1. Do address translation from IPA to PA.
2. Pin domain pages till they are mapped into OP-TEE or TA

Looking at the code, I think the pages are mapped while OP-TEE is using
them. If so, it should be s/till/while/.

     address space, so the domain can't transfer these pages to
     another domain or balloon them out.
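
For context, the page-list format described in optee_msg.h, which the
mediator has to rewrite, looks roughly like this (the struct name below
is only illustrative; the header documents the layout in a comment):

    /*
     * Each 4KiB page of the list holds 511 page addresses plus a
     * link to the next page of the list.
     */
    #define PAGELIST_ENTRIES_PER_PAGE \
        ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(uint64_t)) - 1)

    struct noncontig_page_list {
        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
        uint64_t next_page_data;
    };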
Address translation is done by the translate_noncontig(...) function.
It allocates a new buffer from the xenheap and then walks over the
guest-provided list of pages, translates the addresses and stores the
PAs into the newly allocated buffer. This buffer will be provided to
OP-TEE instead of the original buffer from the guest. It will be
freed at the end of the standard call.

At the same time this function pins the pages and stores them in a
struct optee_shm_buf object. This object lives for as long as the
given SHM buffer is known to OP-TEE. It will be freed after the guest
unregisters the shared buffer; at that point the pages will be
unpinned.
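
A minimal sketch of the per-page part of that walk, assuming
hypothetical helper names (the real function also handles the linked
page-list format and error unwinding):

    /*
     * Sketch only: translate one guest page of a shared buffer and
     * pin it. Helper and field names here are illustrative.
     */
    static int pin_and_translate_page(struct domain *d, uint64_t ipa,
                                      uint64_t *pa, struct page_info **page)
    {
        p2m_type_t p2mt;
        struct page_info *pg;

        /* Taking a reference here is what pins the guest page. */
        pg = get_page_from_gfn(d, paddr_to_pfn(ipa), &p2mt, P2M_ALLOC);
        if ( !pg )
            return -EINVAL;

        if ( p2mt != p2m_ram_rw )
        {
            put_page(pg);
            return -EINVAL;
        }

        /* Store the machine address in the xenheap copy of the list. */
        *pa = page_to_maddr(pg) | (ipa & ~PAGE_MASK);
        *page = pg;

        return 0;
    }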

We don't need to do any special reference counting because OP-TEE
tracks the buffer on its side. So, the mediator will unpin the pages
only when OP-TEE returns successfully from the
OPTEE_MSG_CMD_UNREGISTER_SHM call.
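
Roughly, that cleanup path looks like this (a sketch with assumed
structure and list names):

    /*
     * Sketch: drop the references taken during translation once
     * OP-TEE has confirmed it no longer knows the buffer.
     */
    static void free_optee_shm_buf(struct optee_domain *ctx, uint64_t cookie)
    {
        struct optee_shm_buf *shm_buf, *tmp;
        unsigned int i;

        list_for_each_entry_safe( shm_buf, tmp, &ctx->optee_shm_buf_list, list )
        {
            if ( shm_buf->cookie != cookie )
                continue;

            /* Unpin the guest pages pinned in translate_noncontig(). */
            for ( i = 0; i < shm_buf->page_cnt; i++ )
                put_page(shm_buf->pages[i]);

            list_del(&shm_buf->list);
            xfree(shm_buf);
            return;
        }
    }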

Signed-off-by: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>
---

   Changes from v2:
   - Made sure that the guest does not try to register a shared buffer
     with the same cookie twice
   - Fixed coding style
   - Use access_guest_memory_by_ipa() instead of direct memory mapping

   xen/arch/arm/tee/optee.c | 274 +++++++++++++++++++++++++++++++++++++++
   1 file changed, 274 insertions(+)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 771148e940..cfc3b34df7 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -37,6 +37,10 @@
    */
   #define MAX_RPC_SHMS    MAX_STD_CALLS
   +/* Maximum total number of pages that guest can share with OP-TEE */
   +#define MAX_TOTAL_SMH_BUF_PG    16384

Please explain in the commit message and code how you came up with this value.

Basically, it is taken off the top of my head. I just needed some
number. You see, the number of registered shared memory buffers solely
depends on the free heap in OP-TEE. But the same heap is used for other
purposes, so it is hard to tell how many pages can be shared with
OP-TEE. I assumed that 64M ought to be enough for anybody.

Such an explanation would be fine for me in the commit message and the code. The goal is to log how we came up with the value (even if it is somewhat arbitrary). This would help us if we need to refine the value in the future.
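
For instance, a comment along these lines next to the define would
capture that reasoning (wording is only a suggestion):

    /*
     * Maximum total number of pages that a guest can share with
     * OP-TEE. This is a rule-of-thumb value: 16384 pages of 4KiB
     * each, i.e. 64MiB of guest memory, which bounds the mediator's
     * bookkeeping while leaving room for any realistic amount of
     * registered shared memory.
     */
    #define MAX_TOTAL_SMH_BUF_PG    16384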


Probably it is worth making this configurable via the domctl interface.
It is not strictly necessary for now. We can refine it in the future if needed.

[...]

+    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
+
+    /* Size of the user buffer in bytes */
+    size = ROUNDUP(param->u.tmem.size + page_offset,
+                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+    order = get_order_from_bytes(get_pages_list_size(num_pages));
+
+    pages_data_xen_start = alloc_xenheap_pages(order, 0);

By using alloc_xenheap_pages, you may end up allocating more memory
than necessary when the order gets bigger. What is the biggest order
you expect here?

It depends on MAX_TOTAL_SMH_BUF_PG. One page can contain up to 511
entries, plus a reference to the next one. So, basically at most I will
need about 32 pages, which gives order 5-6.
I think I can try xzalloc or the domheap there. But it looks like there
is no xmem_pool for the domheap, so I would still be forced to use
alloc_domheap_pages().
xmem_pool is not used when you allocate a buffer larger than PAGE_SIZE. Instead, xmalloc will allocate a bigger order and free the unused pages (see xmalloc_whole_pages).

It is not overly critical to allocate more for now, but it would be nice to add it as a TODO in the code.
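
For the record, the worst case with the current limit works out roughly
as follows (assuming 511 entries per 4KiB page of the list, as above):

    pages needed for the list = DIV_ROUND_UP(16384, 511) = 33
    get_order_from_bytes(33 * 4096) = 6, i.e. 64 pages allocated

so close to half of the allocation can sit unused in the worst case,
which is what the TODO should capture.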

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
