
Re: [Xen-devel] [PATCH v03 04/10] arm: introduce remoteprocessor iommu module



Hi Andrii,

I'm still concerned about security with this patch. Only one guest should be able to access a given remoteproc at a time, and therefore the same should hold for its MMU.

AFAIU, even with XSM, you are still registering every MMU for every domain that wants to access a remoteproc.

You have to find a way to say "this domain is allowed to handle this remoteproc", maybe by keeping a list of MMUs per domain.
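
Something along these lines, as a rough and completely untested sketch (all the names below are invented for illustration):

struct arch_domain
{
    ...
    /* remoteproc MMUs this domain is allowed to drive */
    struct list_head remoteproc_mmus;
    ...
};

/* called when the toolstack assigns a remoteproc to the domain;
 * assumes a new "struct list_head domain_node" field in struct mmu_info */
int remoteproc_iommu_assign_mmu(struct domain *d, struct mmu_info *mmu)
{
    spin_lock(&mmu->lock);
    list_add_tail(&mmu->domain_node, &d->arch.remoteproc_mmus);
    spin_unlock(&mmu->lock);

    return 0;
}

remoteproc_iommu_register_mmio_handlers() would then only walk d->arch.remoteproc_mmus rather than the global mmu_list[].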

On 02/09/14 08:46, Andrii Tseglytskyi wrote:
diff --git a/xen/arch/arm/remoteproc/Makefile b/xen/arch/arm/remoteproc/Makefile
new file mode 100644
index 0000000..0b0ee0e
--- /dev/null
+++ b/xen/arch/arm/remoteproc/Makefile
@@ -0,0 +1 @@

[..]

+
+static struct mmu_info *mmu_list[] = {

Maybe "static const"?

AFAIU, you will add callbacks to this structure in a subsequent patch, right?

If so, I would let the remoteproc driver add the callbacks itself, even if it's done at compile time.

You can take a look at the platform code for an example.
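
For instance, each driver could declare its own struct mmu_info and put a pointer to it in a dedicated section, similar to what PLATFORM_START/PLATFORM_END do for struct platform_desc. A rough sketch (the macro, section and driver names are invented):

#define REMOTEPROC_MMU_REGISTER(_mmu)                                   \
    static struct mmu_info *const _mmu##_entry __attribute__((__used__, \
        __section__(".data.remoteproc_mmu"))) = &(_mmu)

/* in the IPU driver, for instance: */
static struct mmu_info ipu_mmu =
{
    .name            = "IPU_L2_MMU",
    .translate_pfunc = ipu_translate_pagetable,
    /* ... */
};
REMOTEPROC_MMU_REGISTER(ipu_mmu);

The core code would then iterate between the section's start/end symbols instead of hard-coding mmu_list[].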

[..]

+static struct mmu_pagetable *mmu_alloc_pagetable(struct mmu_info *mmu, paddr_t paddr)
+{
+    struct mmu_pagetable *pgt;
+    u32 pgt_size = MMU_PGD_TABLE_SIZE(mmu);
+
+    pgt = xzalloc_bytes(sizeof(struct mmu_pagetable));
+    if ( !pgt )
+    {
+        pr_mmu(mmu, "failed to alloc pagetable structure");

allocate

+        return NULL;
+    }
+
+    /* allocate pagetable managed by hypervisor */
+    pgt->hyp_pagetable = xzalloc_bytes(pgt_size);
+    if ( !pgt->hyp_pagetable )
+    {
+        pr_mmu(mmu, "failed to alloc private hyp_pagetable");


allocate

+        return NULL;
+    }
+
+    /* alocate pagetable for ipa storing */

allocate

IPA

[..]

+paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
+                                                struct mmu_pagetable *pgt,
+                                                paddr_t maddr, paddr_t hyp_addr)
+{
+    u32 *pte_table = NULL, *hyp_pte_table = NULL;
+    u32 i;
+
+    /* map second level translation table */
+    pte_table = map_domain_page(maddr>>PAGE_SHIFT);
+    if ( !pte_table )
+    {

map_domain_page can't fail. Therefore the check is not necessary.


+        pr_mmu(mmu, "failed to map pte table");
+        return INVALID_PADDR;
+    }
+
+    clean_and_invalidate_xen_dcache_va_range(pte_table, PAGE_SIZE);

I would add a comment explaining why "clean_and_invalidate_..." is necessary here.

+    /* allocate new second level pagetable once */
+    if ( 0 == hyp_addr )
+    {
+        hyp_pte_table = xzalloc_bytes(PAGE_SIZE);
+        if ( !hyp_pte_table )
+        {
+            pr_mmu(mmu, "failed to alloc new pte table");
+            return INVALID_PADDR;
+        }
+    }
+    else
+    {
+        hyp_pte_table = __va(hyp_addr & PAGE_MASK);
+    }

Braces are not necessary.

[..]

+    unmap_domain_page(pte_table);
+
+    clean_and_invalidate_xen_dcache_va_range(hyp_pte_table, MMU_PTE_TABLE_SIZE(mmu));

We usually put a blank line before the final return.

+    return __pa(hyp_pte_table);
+}
+
+static int mmu_init(struct mmu_info *mmu, u32 data)
+{
+    ASSERT(mmu);
+    ASSERT(!mmu->mem_map);
+

Shouldn't you check that the remoteproc MMU will work with the current board?

As this code is generic, people may want to run the same Xen binary on multiple platforms, only some of which have a remoteproc. We don't want the other boards to be able to use some (if not all) of these MMU drivers.
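
For instance, mmu_init() could bail out early when the MMU is not described in the host device tree (sketch only, assuming a new "compatible" field in struct mmu_info):

static int mmu_init(struct mmu_info *mmu, u32 data)
{
    struct dt_device_node *node;

    ASSERT(mmu);
    ASSERT(!mmu->mem_map);

    /* skip MMUs which are not present on this board */
    node = dt_find_compatible_node(NULL, NULL, mmu->compatible);
    if ( !node )
        return 0;
    ...
}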

[..]

+static int mmu_register_mmio_handler(struct mmu_info *mmu, u32 data)
+{
+    struct domain *dom = (struct domain *) data;
+
+    register_mmio_handler(dom, &remoteproc_mmio_handler_ops,
+                          mmu->mem_start,
+                          mmu->mem_size);
+
+    pr_mmu(mmu, "register mmio handler dom %u base 0x%"PRIpaddr", size 0x%"PRIpaddr"",
+           dom->domain_id, mmu->mem_start, mmu->mem_size);
+
+    return 0;
+}

Thinking about the MMIO handler, I would extend register_mmio_handler to take a data pointer as a parameter. This pointer would be your MMU. It would avoid wasting time looking up the MMU and make the code simpler.
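
I.e. something like this (sketch only, I haven't checked all the existing callers):

void register_mmio_handler(struct domain *d,
                           const struct mmio_handler_ops *ops,
                           paddr_t addr, paddr_t size, void *data);

/* your registration then becomes: */
register_mmio_handler(dom, &remoteproc_mmio_handler_ops,
                      mmu->mem_start, mmu->mem_size, mmu);

The data pointer would be stored in struct mmio_handler and passed back to the read/write callbacks, so your handler gets the right struct mmu_info directly instead of searching mmu_list for the MMU which owns the trapped address.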


[..]

+
+int remoteproc_iommu_register_mmio_handlers(struct domain *dom)
+{
+    int res;
+
+    if ( is_idle_domain(dom) )
+        return -EPERM;
+
+    /* check is domain allowed to access remoteproc MMU */
+    res = xsm_domctl(XSM_HOOK, dom, XEN_DOMCTL_access_remote_pagetable);
+    if ( res )
+    {
+        printk(XENLOG_ERR "dom %u is not allowed to access remoteproc MMU res (%d)",
+               dom->domain_id, res);
+        return -EPERM;
+    }
+
+    return mmu_for_each(mmu_register_mmio_handler, (u32)dom);
+}

I don't see any call to this function in xen/arch/arm/domain.c. Is that intentional?

To continue my comment from the beginning of this mail, I don't think we should register every MMU's callbacks for each domain that will use a remoteproc.

Also, what about domain destruction? Shouldn't you free the MMU page tables to save space?
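
E.g. something along these lines, called from arch_domain_destroy() (sketch only; the function name, the per-domain list from my earlier comment and the "link_node" field are all made up):

void remoteproc_iommu_domain_destroy(struct domain *d)
{
    struct mmu_info *mmu;
    struct mmu_pagetable *pgt, *tmp;

    list_for_each_entry ( mmu, &d->arch.remoteproc_mmus, domain_node )
    {
        spin_lock(&mmu->lock);

        /* drop every shadow pagetable built for this MMU */
        list_for_each_entry_safe ( pgt, tmp, &mmu->pagetables_list, link_node )
        {
            list_del(&pgt->link_node);
            xfree(pgt->hyp_pagetable);
            xfree(pgt);
        }

        spin_unlock(&mmu->lock);
    }
}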

+
+static int mmu_init_all(void)
+{
+    int res;
+
+    res = mmu_for_each(mmu_init, 0);
+    if ( res )
+    {
+        printk("%s error during init %d\n", __func__, res);
+        return res;
+    }
+
+    return 0;
+}

Again, what will happen if an MMU fails to initialize? I guess bad things... do_initcalls doesn't check the return value of the initcall and will silently ignore any error.

I'm not sure what the common (i.e. ARM & x86) convention for initcalls is beyond returning 0 on success, so maybe a panic would be best here.
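
I.e. something like:

static int __init mmu_init_all(void)
{
    int res;

    res = mmu_for_each(mmu_init, 0);
    if ( res )
        panic("%s: error during remoteproc MMU init (%d)\n", __func__, res);

    return 0;
}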

+__initcall(mmu_init_all);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/remoteproc_iommu.h b/xen/include/asm-arm/remoteproc_iommu.h
new file mode 100644
index 0000000..6fa78ee
--- /dev/null
+++ b/xen/include/asm-arm/remoteproc_iommu.h
@@ -0,0 +1,82 @@
+/*
+ * xen/include/xen/remoteproc_iommu.h

xen/include/asm-arm/remoteproc_iommu.h

+ *
+ * Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
+ * Copyright (c) 2014 GlobalLogic
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _REMOTEPROC_IOMMU_H_
+#define _REMOTEPROC_IOMMU_H_

We usually include the directory in the header guard name, i.e.:

__ARM_REMOTEPROC_IOMMU_H__

And of course fix it at the end of the file too :).
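
So the guard would look like:

#ifndef __ARM_REMOTEPROC_IOMMU_H__
#define __ARM_REMOTEPROC_IOMMU_H__

...

#endif /* __ARM_REMOTEPROC_IOMMU_H__ */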

+struct mmu_info {
+    const char  *name;
+    const struct pagetable_data *pg_data;
+    /* register where phys pointer to pagetable is stored */
+    u32                 *trap_offsets;
+    paddr_t             mem_start;
+    paddr_t             mem_size;
+    spinlock_t          lock;
+    struct list_head    pagetables_list;
+    u32                 num_traps;
+    void __iomem               *mem_map;

Hard Tab.

+    paddr_t    (*translate_pfunc)(struct mmu_info *, struct mmu_pagetable *);

Same here.

+    int (*copy_pagetable_pfunc)(struct mmu_info *mmu, struct mmu_pagetable *pgt);
+    void (*print_pagetable_pfunc)(struct mmu_info *);
+};
+
+int remoteproc_iommu_register_mmio_handlers(struct domain *dom);
+
+paddr_t remoteproc_iommu_translate_second_level(struct mmu_info *mmu,
+                                                 struct mmu_pagetable *pgt,
+                                                 paddr_t maddr, paddr_t hyp_addr);
+
+#endif /* _REMOTEPROC_IOMMU_H_ */

We usually add the following lines at the end of the file:

/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */

Regards,


--
Julien Grall
