
Re: [Xen-devel] [PATCH v4 07/10] xen/arm: optee: add support for arbitrary shared memory



Hi,

On 20/03/2019 16:39, Volodymyr Babchuk wrote:

Julien Grall writes:

On 07/03/2019 21:04, Volodymyr Babchuk wrote:
From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>

Some of the patches are using your EPAM e-mail address. Others are
using your gmail address. Could you confirm this is expected?

No, I'll make sure that all patches are authored by
<volodymyr_babchuk@xxxxxxxx>

And your signed-off-by as well :).

[...]

I can't promise Xen will always be 4K only. So it would be better to
make the number page-size-agnostic, or at least to state clearly on
top of the definition that 4KB is assumed (maybe with a
BUILD_BUG_ON(PAGE_SIZE != 4096) if one is not already in place).

I can replace the define with something like (67108864 / PAGE_SIZE),
with an appropriate comment, of course.

Well, none of your code is ready for a PAGE_SIZE other than 4KB. So the BUILD_BUG_ON(PAGE_SIZE != 4096) is probably more suitable.
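For illustration, the two suggestions combined might look like this (a
sketch only; the 67108864 constant and the macro name follow the quoted
patch, and the placement of the build-time check is an assumption):

    /*
     * 64MiB of shared buffers per domain, expressed in pages rather
     * than hardcoding the page count.
     */
    #define MAX_TOTAL_SMH_BUF_PG    (67108864 / PAGE_SIZE)

    /* Hypothetical helper: the page-list handling assumes 4KB pages. */
    static void __init check_optee_assumptions(void)
    {
        BUILD_BUG_ON(PAGE_SIZE != 4096);
    }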

[...]

+static struct optee_shm_buf *allocate_optee_shm_buf(struct optee_domain *ctx,
+                                                    uint64_t cookie,
+                                                    unsigned int pages_cnt,
+                                                    struct page_info *pg_list,
+                                                    unsigned int pg_list_order)
+{
+    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
+    int old, new;
+    int err_code;
+
+    do
+    {
+        old = atomic_read(&ctx->optee_shm_buf_pages);
+        new = old + pages_cnt;
+        if ( new >= MAX_TOTAL_SMH_BUF_PG )

Again, the limitation is on the number of pages and is quite high. What
would prevent a guest from registering shared memory page by page? If
nothing, then I think you can end up with interesting issues in Xen
because of the growing list and the memory used.

OP-TEE will limit this on its side. In most cases Xen has a much bigger
heap than OP-TEE :-)

The main problem is not the heap. The problem is the size of the list to browse.


Do you want me to limit this both in memory size and in number of buffers?

I am not necessarily asking to put a limitation in place. What I ask is documenting what can happen, so we can take proper action in the future (such as when deciding whether this can be security supported).
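A sketch of the kind of documentation asked for here (wording is
illustrative; the macro name and free_optee_shm_buf() are from the
quoted patch):

    /*
     * TODO: MAX_TOTAL_SMH_BUF_PG bounds the number of pages, not the
     * number of buffers. A guest registering buffers one page at a
     * time can grow optee_shm_buf_list to MAX_TOTAL_SMH_BUF_PG
     * entries, making every list traversal (e.g. in
     * free_optee_shm_buf()) expensive. Revisit before security
     * supporting this code.
     */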

[...]


[...]
   static int optee_relinquish_resources(struct domain *d)
   {
       struct optee_std_call *call, *call_tmp;
       struct shm_rpc *shm_rpc, *shm_rpc_tmp;
+    struct optee_shm_buf *optee_shm_buf, *optee_shm_buf_tmp;
       struct optee_domain *ctx = d->arch.tee;

       if ( !ctx )
@@ -381,6 +552,13 @@ static int optee_relinquish_resources(struct domain *d)
       list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
           free_shm_rpc(ctx, shm_rpc->cookie);
   +    if ( hypercall_preempt_check() )
+        return -ERESTART;

Same question as the other hypercall_preempt_check().

Excuse me, but what question? I looked through all your emails and can't
find it.

It looks like I have by mistake trimmed my question on the other patch :/.

Anyway, you should explain how you decided the placement of each hypercall_preempt_check(). For instance, if the lists can grow large, then you may want to add a preempt check inside the loop.

The key point here is to document the choice, even if you write that it is "random". This will help in the future if we need to revise the preemption points.
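For instance, a preemptible version of the loop quoted above might look
like this (a sketch based on the quoted code, not the actual patch):

    list_for_each_entry_safe( shm_rpc, shm_rpc_tmp, &ctx->shm_rpc_list, list )
    {
        free_shm_rpc(ctx, shm_rpc->cookie);

        /* Let the hypercall be restarted if preemption is due. */
        if ( hypercall_preempt_check() )
            return -ERESTART;
    }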


+
+    list_for_each_entry_safe( optee_shm_buf, optee_shm_buf_tmp,
+                              &ctx->optee_shm_buf_list, list )
+        free_optee_shm_buf(ctx, optee_shm_buf->cookie);
+
       return 0;
   }
@@ -406,11 +584,186 @@ static void optee_domain_destroy(struct domain *d)

        ASSERT(!spin_is_locked(&ctx->lock));
        ASSERT(!atomic_read(&ctx->call_count));
+    ASSERT(!atomic_read(&ctx->optee_shm_buf_pages));
        ASSERT(list_empty(&ctx->shm_rpc_list));

        XFREE(d->arch.tee);
    }
+#define PAGELIST_ENTRIES_PER_PAGE                       \
+    ((OPTEE_MSG_NONCONTIG_PAGE_SIZE / sizeof(u64)) - 1)
+
+static size_t get_pages_list_size(size_t num_entries)
+{
+    int pages = DIV_ROUND_UP(num_entries, PAGELIST_ENTRIES_PER_PAGE);
+
+    return pages * OPTEE_MSG_NONCONTIG_PAGE_SIZE;
+}
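
(Worked example, assuming OPTEE_MSG_NONCONTIG_PAGE_SIZE is 4096:
entries are 8 bytes, so each list page holds 511 buffer addresses plus
one link to the next list page. For num_entries = 1000 this gives
DIV_ROUND_UP(1000, 511) = 2 pages, i.e. get_pages_list_size() returns
8192 bytes.)
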
+
+static int translate_noncontig(struct optee_domain *ctx,
+                               struct optee_std_call *call,
+                               struct optee_msg_param *param)
+{
+    uint64_t size;
+    unsigned int page_offset;
+    unsigned int num_pages;
+    unsigned int order;
+    unsigned int entries_on_page = 0;
+    gfn_t gfn;
+    p2m_type_t p2m;
+    struct page_info *guest_page, *xen_pages;
+    struct optee_shm_buf *optee_shm_buf;
+    /*
+     * This is memory layout for page list. Basically list consists of 4k pages,
+     * every page store 511 page addresses of user buffer and page address of
+     * the next page of list.
+     *
+     * Refer to OPTEE_MSG_ATTR_NONCONTIG description in optee_msg.h for details.
+     */
+    struct {
+        uint64_t pages_list[PAGELIST_ENTRIES_PER_PAGE];
+        uint64_t next_page_data;
+    } *pages_data_guest, *pages_data_xen;
+
+    /* Offset of user buffer withing page */

Offset of the buffer within the OP-TEE page

+    page_offset = param->u.tmem.buf_ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1);
+
+    /* Size of the user buffer in bytes */
+    size = ROUNDUP(param->u.tmem.size + page_offset,
+                   OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+    num_pages = DIV_ROUND_UP(size, OPTEE_MSG_NONCONTIG_PAGE_SIZE);
+
+    order = get_order_from_bytes(get_pages_list_size(num_pages));
+
+    xen_pages = alloc_domheap_pages(current->domain, order, 0);
+    if ( !xen_pages )
+        return -ENOMEM;
+
+    optee_shm_buf = allocate_optee_shm_buf(ctx, param->u.tmem.shm_ref,
+                                           num_pages, xen_pages, order);

In an earlier version, I pointed out that you allow allocating up to
64MB per call. This is a potential security issue on Xen. While I said
I would be happy to get this code merged as is, I would expect to at
least see a TODO in the code explaining the potential problem, so we
know the problem exists and can't security support this until it is
fixed.

Sure. Do you want me to add this TODO there, or inside the
allocate_optee_shm_buf() function, where the allocation actually happens?

I don't mind the placement as long as the TODO is there.
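A sketch of what such a TODO could look like (wording is illustrative;
the 64MB figure is from the discussion above):

    /*
     * TODO: A guest-supplied buffer can make Xen allocate up to 64MB
     * of domheap pages for the translated page list in a single call.
     * This is a potential resource-exhaustion issue and needs to be
     * bounded before this code can be security supported.
     */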


I may have missed other places while reviewing this version. Please go
back through the reviews I have made in the past and document all the
potential security holes.

Okay, will do.

+    if ( IS_ERR(optee_shm_buf) )
+        return PTR_ERR(optee_shm_buf);
+
+    gfn = gaddr_to_gfn(param->u.tmem.buf_ptr &
+                       ~(OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1));
+
+    guest_page = get_page_from_gfn(current->domain, gfn_x(gfn), &p2m, P2M_ALLOC);
+    if ( !guest_page || p2m != p2m_ram_rw )
+        return -EINVAL;
+
+    pages_data_guest = __map_domain_page(guest_page);
+    pages_data_xen = __map_domain_page(xen_pages);
+
+    while ( num_pages )
+    {
+        struct page_info *page;

Newline here please.

+        page = get_page_from_gfn(current->domain,
+                  paddr_to_pfn(pages_data_guest->pages_list[entries_on_page]),
+                  &p2m, P2M_ALLOC);

The indentation is wrong here. But the problem is due to the long
names. For instance, you have the word "page" three times on the same
line. Is that necessary?

The code is quite complex. I want every variable to be as descriptive as
possible. In some cases it leads to issues like this :-)

There are ways to simplify this code.

 1) The OP-TEE code contains the following pattern a few times:

     page = get_page_from_gfn(....)
     if ( !page || p2mt != p2m_ram_rw )
       ...

   You can create a helper to encapsulate that. I think you have one case where
   the p2mt check is different, so you may want to pass the expected type as a
   parameter (see the sketch after this list).

2) You could probably move the __map_domain_page(guest_page/xen_pages) calls
   within the loop, and just deal with guest_page/xen_pages outside.

With that and the renaming, the code will suddenly become a bit less complex.
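
A hypothetical version of the helper suggested in 1) (the name and
exact signature are illustrative only; the calls follow the quoted
code):

    static struct page_info *get_domain_ram_page(gfn_t gfn, p2m_type_t expected)
    {
        p2m_type_t p2mt;
        struct page_info *page;

        page = get_page_from_gfn(current->domain, gfn_x(gfn), &p2mt, P2M_ALLOC);
        if ( !page )
            return NULL;

        /* Drop the reference if the page is not of the expected type. */
        if ( p2mt != expected )
        {
            put_page(page);
            return NULL;
        }

        return page;
    }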

Cheers,

--
Julien Grall
