
Re: [Xen-devel] [RFC v01 2/3] arm: omap: translate iommu mapping to 4K pages



On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> The patch introduces the following algorithm:
> - enumerate all first level translation entries
> - for each section, create 256 pages of 4096 bytes each
> - for each supersection, create 4096 pages of 4096 bytes each
> - flush the cache to synchronize the Cortex-A15 and the IOMMU
> 
> This algorithm makes it possible to use 4K mappings only.
> 
> Change-Id: Ie2cf45f23e0c170e9ba9d58f8dbb917348fdbd33
> Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
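
For context, the page counts in the description follow directly from the
shift values the driver works with (1 MB sections, 16 MB supersections,
4 KB small pages). A minimal sketch of the arithmetic, assuming the usual
shifts of 20, 24 and 12:

    #include <stdio.h>

    /* Assumed shift values, matching the ARM short-descriptor format
     * used by the OMAP IOMMU driver quoted below. */
    #define IOPGD_SHIFT        20  /* first-level entry: 1 MB section */
    #define IOSUPER_SHIFT      24  /* supersection: 16 MB             */
    #define IOPTE_SMALL_SHIFT  12  /* small page: 4 KB                */

    #define PTRS_PER_IOPTE        (1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
    #define IOSECTION_PER_IOSUPER (1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))

    int main(void)
    {
        /* 1 MB section -> 256 pages of 4 KB */
        printf("pages per section:      %lu\n", PTRS_PER_IOPTE);
        /* 16 MB supersection -> 16 sections * 256 pages = 4096 pages */
        printf("pages per supersection: %lu\n",
               IOSECTION_PER_IOSUPER * PTRS_PER_IOPTE);
        return 0;
    }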

I take it that the first patch doesn't actually work without this one?
In that case it might make sense to just merge them into one.


>  xen/arch/arm/omap_iommu.c |   50 +++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 46 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/omap_iommu.c b/xen/arch/arm/omap_iommu.c
> index 4dab30f..7ec03a2 100644
> --- a/xen/arch/arm/omap_iommu.c
> +++ b/xen/arch/arm/omap_iommu.c
> @@ -72,6 +72,9 @@
>  #define PTRS_PER_IOPTE               (1UL << (IOPGD_SHIFT - IOPTE_SMALL_SHIFT))
>  #define IOPTE_TABLE_SIZE     (PTRS_PER_IOPTE * sizeof(u32))
>  
> +/* 16 sections in supersection */
> +#define IOSECTION_PER_IOSUPER        (1UL << (IOSUPER_SHIFT - IOPGD_SHIFT))
> +
>  /*
>   * some descriptor attributes.
>   */
> @@ -117,6 +120,9 @@ static struct mmu_info *mmu_list[] = {
>       &omap_dsp_mmu,
>  };
>  
> +static bool translate_supersections_to_pages = true;
> +static bool translate_sections_to_pages = true;
> +
>  #define mmu_for_each(pfunc, data)                                            \
>  ({                                                                           \
>       u32 __i;                                                                \
> @@ -213,6 +219,29 @@ static u32 mmu_translate_pgentry(struct domain *dom, u32 iopgd, u32 da, u32 mask
>       return vaddr;
>  }
>  
> +static u32 mmu_iopte_alloc(struct mmu_info *mmu, struct domain *dom, u32 iopgd, u32 sect_num)
> +{
> +     u32 *iopte = NULL;
> +     u32 i;
> +
> +     iopte = xzalloc_bytes(PAGE_SIZE);
> +     if (!iopte) {
> +             printk("%s Fail to alloc 2nd level table\n", mmu->name);
> +             return 0;
> +     }
> +
> +     for (i = 0; i < PTRS_PER_IOPTE; i++) {
> +             u32 da, vaddr, iopgd_tmp;
> +             da = (sect_num << IOSECTION_SHIFT) + (i << IOPTE_SMALL_SHIFT);
> +             iopgd_tmp = (iopgd & IOSECTION_MASK) + (i << IOPTE_SMALL_SHIFT);
> +             vaddr = mmu_translate_pgentry(dom, iopgd_tmp, da, IOPTE_SMALL_MASK);
> +             iopte[i] = vaddr | IOPTE_SMALL;
> +     }
> +
> +     flush_xen_dcache_va_range(iopte, PAGE_SIZE);
> +     return __pa(iopte) | IOPGD_TABLE;
> +}
> +
>  /*
>   * on boot table is empty
>   */
> @@ -245,13 +274,26 @@ static int mmu_translate_pagetable(struct domain *dom, struct mmu_info *mmu)
>  
>               /* "supersection" 16 Mb */
>               if (iopgd_is_super(iopgd)) {
> -                     da = i << IOSECTION_SHIFT;
> -                     mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
> +                     if(likely(translate_supersections_to_pages)) {
> +                             u32 j, iopgd_tmp;
> +                             for (j = 0 ; j < IOSECTION_PER_IOSUPER; j++) {
> +                                     iopgd_tmp = iopgd + (j * IOSECTION_SIZE);
> +                                     mmu->pagetable[i + j] = mmu_iopte_alloc(mmu, dom, iopgd_tmp, i);
> +                             }
> +                             i += (j - 1);
> +                     } else {
> +                             da = i << IOSECTION_SHIFT;
> +                             mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSUPER_MASK);
> +                     }
>  
>               /* "section" 1Mb */
>               } else if (iopgd_is_section(iopgd)) {
> -                     da = i << IOSECTION_SHIFT;
> -                     mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
> +                     if (likely(translate_sections_to_pages)) {
> +                             mmu->pagetable[i] = mmu_iopte_alloc(mmu, dom, iopgd, i);
> +                     } else {
> +                             da = i << IOSECTION_SHIFT;
> +                             mmu->pagetable[i] = mmu_translate_pgentry(dom, iopgd, da, IOSECTION_MASK);
> +                     }
>  
>               /* "table" */
>               } else if (iopgd_is_table(iopgd)) {

Since the 16MB and 1MB sections might not actually be contiguous in
machine address space, this patch replaces the guest-allocated sections
with pte tables pointing to the original IPAs. Is that right?
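
For illustration, here is a rough sketch of what that replacement amounts
to: each 1 MB guest section is split into 256 independent 4 KB entries, so
the backing machine pages need not be contiguous. The ipa_to_maddr() helper
is hypothetical and stands in for the per-page p2m lookup that
mmu_translate_pgentry() performs; the descriptor bit value is an assumption
based on the ARM short-descriptor format.

    #include <stdint.h>

    #define PAGE_SHIFT      12
    #define IOSECTION_SHIFT 20
    #define PTRS_PER_IOPTE  (1UL << (IOSECTION_SHIFT - PAGE_SHIFT))  /* 256 */
    #define IOPTE_SMALL     0x2   /* assumed small-page descriptor type bits */

    /* Hypothetical stand-in for the per-page p2m lookup done by
     * mmu_translate_pgentry() in the patch; a real implementation
     * would walk the p2m instead of returning the identity. */
    static uint32_t ipa_to_maddr(uint32_t ipa)
    {
        return ipa; /* identity mapping, for illustration only */
    }

    /*
     * Split one guest 1 MB section entry into 256 small-page entries.
     * Because every 4 KB page is translated on its own, the resulting
     * machine pages do not have to be contiguous, which is why the
     * section and supersection entries cannot be kept as-is.
     */
    static void split_section(uint32_t *iopte, uint32_t section_ipa)
    {
        uint32_t i;

        for (i = 0; i < PTRS_PER_IOPTE; i++) {
            uint32_t ipa = section_ipa + (i << PAGE_SHIFT);
            iopte[i] = ipa_to_maddr(ipa) | IOPTE_SMALL;
        }
    }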



 

