[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [RFC v01 0/3] xen/arm: introduce IOMMU driver for OMAP platforms



On Wed, 22 Jan 2014, Stefano Stabellini wrote:
> On Wed, 22 Jan 2014, Andrii Tseglytskyi wrote:
> > Hi,
> > 
> > The following patch series is an RFC for a possible implementation of a
> > simple MMU module, designed to translate IPA to MA for peripheral
> > processors such as the GPU / IPU on OMAP platforms. Currently our OMAP
> > platform (OMAP5 Panda) has 3 external MMUs which need to be handled
> > properly.
> > 
> > It would be great to get community feedback - will this be useful for
> > the Xen project?
> > 
> > Let me describe the algorithm briefly. It is simple and straightforward.
> > The following logic is used to translate addresses from IPA to MA:
> > 
> > 1. During boot the guest domain creates a "pagetable" for the external
> > MMU IP. The pagetable is a singleton data structure stored in ordinary
> > kernel heap memory. All memory mappings for the corresponding MMU are
> > stored inside it. The format of the "pagetable" is well defined.
> > 
> > 2. The guest domain enables the peripheral remote processor. As part of
> > the enable sequence, the kernel allocates the chunks of heap memory
> > needed by the remote processor and stores pointers to the allocated
> > chunks in the already created "pagetable". It then writes the physical
> > address of the pagetable to the MMU configuration register. As a result,
> > the MMU IP knows about all allocations, and the remote processor can use
> > them directly in its software.
> > 
> > 3. The Xen omap mmu driver traps accesses to the MMU configuration
> > registers. It reads the physical address of the "pagetable" from the MMU
> > register and creates a copy of it in its own memory. As a result we have
> > two similar configuration data structures - the first in the guest
> > domain kernel, the second in the Xen hypervisor.
> > 
> > 4. The Xen omap mmu driver parses its own copy of the pagetable and
> > translates all physical addresses to the corresponding machine addresses
> > using the existing p2m API. It then writes the physical address of its
> > pagetable (with PAs already translated to MAs) to the MMU IP
> > configuration registers and returns control to the guest domain.
> > 
> > As a result, the guest domain continues enabling the remote processor
> > with its MMU, and the MMU will use the new pagetable modified by the Xen
> > omap mmu driver. The new pagetable will be used directly by the MMU IP,
> > and its new structure will be hidden from the guest domain kernel, which
> > won't know anything about the p2m translation.
> 
> Why don't you map Dom0 1:1 instead?
> If you enabled PLATFORM_QUIRK_DOM0_MAPPING_11 (now enabled by default on
> all platforms), all this wouldn't be necessary, right?

I guess you can't just use the 1:1 mapping because you are assigning the
GPU or IPU to a guest other than Dom0, right?


> 
> 
> > Verified with Xen 4.4-unstable, Linux kernel 3.8 as Dom0, and Linux
> > (Android) kernel 3.4 as DomU. Target platform: OMAP5 Panda.
> > 
> > Thank you for your attention,
> > 
> > Regards,
> > 
> > Andrii Tseglytskyi (3):
> >   arm: omap: introduce iommu module
> >   arm: omap: translate iommu mapping to 4K pages
> >   arm: omap: cleanup iopte allocations
> > 
> >  xen/arch/arm/Makefile     |    1 +
> >  xen/arch/arm/io.c         |    1 +
> >  xen/arch/arm/io.h         |    1 +
> >  xen/arch/arm/omap_iommu.c |  492 +++++++++++++++++++++++++++++++++++++++++++++
> >  4 files changed, 495 insertions(+)
> >  create mode 100644 xen/arch/arm/omap_iommu.c
> > 
> > -- 
> > 1.7.9.5
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-devel
> > 
> 
