
Re: [Xen-devel] [RFC for-4.8 0/6] xen/arm: Add support for mapping mmio-sram nodes into dom0



On Wed, 25 May 2016, Julien Grall wrote:
> Hi Stefano,
> 
> On 25/05/16 10:43, Stefano Stabellini wrote:
> > > For SRAM it would be normal memory, uncached (?), when the property
> > > "no-memory-wc" is not present; otherwise TBD.
> > > 
> > > I suspect we will have to relax more MMIO mappings in the future. Rather
> > > than providing one mapping function per case (the code is very similar
> > > except for the memory attribute), I suggest providing a list of
> > > compatible strings along with the memory attribute to use.
> > > 
> > > All the child nodes would inherit the memory attribute of the parent.
> > > 
> > > What do you think?
> > 
> > That would work for device tree, but we still need to rely on the
> > hypercall for ACPI systems.
> > 
> > Given that it is not easy to add an additional parameter to
> > XENMEM_add_to_physmap_range, I think we'll have to provide a new
> > hypercall to allow setting attributes other than the Xen default. That
> > could be done in Xen 4.8 and Linux >= 4.9.
> 
> There is no need to introduce a new hypercall. XENMEM_add_to_physmap_batch
> contains a field ('foreign_id', to be renamed) that is unused when mapping
> device MMIOs (see Jan's mail [1]).
> 
> XENMEM_add_to_physmap will always map with the default memory attribute
> (Device_nGnRnE), and if the kernel wants to use another memory attribute, it
> will have to use XENMEM_add_to_physmap_batch.
> 
> With the plan suggested in [2], there are no modifications required in Linux
> for the moment.
> 
> Regards,
> 
> [1] http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg02341.html
> [2] http://lists.xenproject.org/archives/html/xen-devel/2016-05/msg02347.html
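
Going back to the compatible-list idea quoted at the top, here is a minimal,
self-contained sketch of how such a table and lookup could look. This is plain
C, not actual Xen code; the attribute names, the table contents and
attr_for_node() are all assumptions for illustration only.

#include <stdio.h>
#include <string.h>

/* Placeholder memory attributes a stage-2 MMIO mapping could use
 * (not the real Xen p2m type names). */
enum mmio_attr {
    MMIO_ATTR_DEVICE,      /* default: Device_nGnRnE */
    MMIO_ATTR_NORMAL_NC,   /* normal memory, non-cacheable */
};

/* "A list of compatible strings with the memory attribute to use":
 * each entry pairs a device tree compatible with the attribute the
 * matching node (and its children) should be mapped with. */
static const struct {
    const char *compatible;
    enum mmio_attr attr;
} mmio_attr_table[] = {
    { "mmio-sram", MMIO_ATTR_NORMAL_NC },
    /* further relaxed compatibles would be added here over time */
};

/* Pick the attribute for a node; fall back to the parent's attribute so
 * that children inherit it, with Device_nGnRnE as the overall default. */
static enum mmio_attr attr_for_node(const char *compatible,
                                    enum mmio_attr parent_attr)
{
    size_t i;

    for (i = 0; i < sizeof(mmio_attr_table) / sizeof(mmio_attr_table[0]); i++)
        if (strcmp(compatible, mmio_attr_table[i].compatible) == 0)
            return mmio_attr_table[i].attr;

    return parent_attr;
}

int main(void)
{
    /* An mmio-sram node and one of its children (no compatible match). */
    enum mmio_attr sram  = attr_for_node("mmio-sram", MMIO_ATTR_DEVICE);
    enum mmio_attr child = attr_for_node("vendor,sram-section", sram);

    printf("mmio-sram -> %d, child -> %d\n", (int)sram, (int)child);
    return 0;
}

A real implementation would of course also have to look at the "no-memory-wc"
property before picking a normal-memory attribute.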

I read the separate thread. Sounds good.
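
Since the plan is to reuse the currently unused field of
XENMEM_add_to_physmap_batch, below is a rough, self-contained sketch of what a
guest-side request could look like. The struct is a local mirror of the one in
the public header with the guest handles flattened to plain pointers, and the
'mem_attr' name and the attribute encoding are purely hypothetical.

#include <stdint.h>
#include <stdio.h>

typedef uint16_t domid_t;

#define DOMID_SELF            0x7FF0
#define XENMAPSPACE_dev_mmio  5

/* Local mirror of struct xen_add_to_physmap_batch (public/memory.h).
 * The 4th field is the one discussed above: it is only meaningful for
 * XENMAPSPACE_gmfn_foreign today, so it is free to carry a memory
 * attribute for XENMAPSPACE_dev_mmio. */
struct xen_add_to_physmap_batch {
    domid_t   domid;      /* domain whose physmap is modified */
    uint16_t  space;      /* XENMAPSPACE_* */
    uint16_t  size;       /* number of pages */
    uint16_t  mem_attr;   /* was 'foreign_domid'/'foreign_id'; hypothetical reuse */
    uint64_t *idxs;       /* source frame numbers */
    uint64_t *gpfns;      /* guest frame numbers to map them at */
    int      *errs;       /* OUT: per-page error codes */
};

/* Hypothetical encoding of the attribute in the reused field. */
#define MEMATTR_DEVICE_NGNRNE  0   /* today's default for dev_mmio */
#define MEMATTR_NORMAL_NC      1   /* e.g. mmio-sram without "no-memory-wc" */

int main(void)
{
    uint64_t mfn  = 0x48000;   /* example SRAM machine frame */
    uint64_t gpfn = 0x48000;   /* mapped 1:1 in this sketch */
    int err = 0;

    struct xen_add_to_physmap_batch xatp = {
        .domid    = DOMID_SELF,           /* dom0 maps into its own physmap */
        .space    = XENMAPSPACE_dev_mmio,
        .size     = 1,
        .mem_attr = MEMATTR_NORMAL_NC,    /* the reused field */
        .idxs     = &mfn,
        .gpfns    = &gpfn,
        .errs     = &err,
    };

    /* A real guest would now issue
     *   HYPERVISOR_memory_op(XENMEM_add_to_physmap_batch, &xatp);
     * here we only print what would be requested. */
    printf("map mfn %#llx at gpfn %#llx, attr %u\n",
           (unsigned long long)xatp.idxs[0],
           (unsigned long long)xatp.gpfns[0],
           (unsigned)xatp.mem_attr);
    return 0;
}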
