
Re: [Xen-devel] [ARM] Handling CMA pool device nodes in Dom0



On Tue, 29 Nov 2016, Julien Grall wrote:
> (CC Stefano)
> 
> On 25/11/16 12:19, Iurii Mykhalskyi wrote:
> > Hello!
> 
> Hi Iurii,
> 
> > 
> > I'm working on support for the Renesas Gen3 H3 board with 4GB of RAM
> > (Salvator-X) in Xen mainline.
> > 
> > Salvator-X has several CMA pool nodes, for example:
> > 
> > 1:
> > adsp_reserved: linux,adsp {
> >     compatible = "shared-dma-pool";
> >     reusable;
> >     reg = <0x00000000 0x57000000 0x0 0x01000000>;
> > };
> > 
> > 2:
> > linux,cma {
> >     compatible = "shared-dma-pool";
> >     reusable;
> >     reg = <0x00000000 0x58000000 0x0 0x18000000>;
> >     linux,cma-default;
> > };
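
For context, "shared-dma-pool" nodes like these normally live under a
/reserved-memory parent node. A minimal sketch of the usual wrapping
follows; the parent's #address-cells, #size-cells and ranges properties
are assumed from the standard Linux reserved-memory binding, not taken
from the actual Salvator-X DTS:

    / {
        reserved-memory {
            /* Assumed: 64-bit addresses and sizes, matching the
             * <hi lo hi lo> reg entries of the child nodes. */
            #address-cells = <2>;
            #size-cells = <2>;
            ranges; /* child reg values are system bus addresses */

            adsp_reserved: linux,adsp {
                compatible = "shared-dma-pool";
                reusable;
                reg = <0x00000000 0x57000000 0x0 0x01000000>;
            };

            linux,cma {
                compatible = "shared-dma-pool";
                reusable;
                reg = <0x00000000 0x58000000 0x0 0x18000000>;
                linux,cma-default;
            };
        };
    };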
> > 
> > During Dom0 memory allocation, we can't guarantee that the allocated
> > memory will contain the regions mentioned above.
> > In the second case, we can actually hardcode the mapped region by
> > using a separate DTS for Dom0 with changed memory regions (see the
> > sketch below).
> > But for the first one, this is not an option - the pool is used by
> > the audio DSP, and its firmware relies on these addresses.
> > 
> > What is the correct way to resolve this situation?
> > Does Xen have a mechanism to handle such cases?
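
To make the hardcoding workaround above concrete: a hypothetical Dom0
DTS fragment in which the memory node is sized so that it covers the
default CMA region. The 0x48000000 RAM base and the bank size are
assumptions for the sake of the example, not values from the original
posting:

    /* Hypothetical Dom0 memory layout: one RAM bank covering the
     * default CMA pool at 0x58000000-0x6fffffff, so Linux can carve
     * the pool out of memory it actually owns. */
    memory@48000000 {
        device_type = "memory";
        reg = <0x0 0x48000000 0x0 0x38000000>;
    };

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        linux,cma {
            compatible = "shared-dma-pool";
            reusable;
            reg = <0x00000000 0x58000000 0x0 0x18000000>;
            linux,cma-default;
        };
    };

This only helps if Xen also gives Dom0 RAM at those guest-physical
addresses; for the adsp_reserved pool, the DSP firmware's fixed
addresses leave no room for such an adjustment, which is exactly the
problem being raised.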
> 
> From my understanding, all the nodes you mentioned live under the
> /reserved-memory node, right? Currently Xen does not parse this node.
> 
> Before answering about a possible implementation in Xen, I would like
> to understand the constraints on these reserved memory regions.
> 
> I understand that when the "reg" property is specified, it is a static
> allocation and we need to be able to map those regions at the same
> addresses in Dom0.
> 
> However, do these regions need to be included in the memory node?

Another question: what caching attributes do they need in the stage2 mapping?
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

