
Re: [Xen-devel] xen/arm: On chip memory mappings

On Tue, 2015-05-19 at 15:16 +1000, Edgar E. Iglesias wrote:
> Hi,
> I'd like to support the assignment of on chip RAMs to guests, starting with 
> dom0.
> The mmio-sram compatible device kinda works already, but the 2nd stage MMU
> mapping is done with DEVICE memory attributes. This doesn't work well for
> SRAMs for several reasons (e.g. performance, alignment checks, etc.).
> I guess we could add special treatment of these nodes to create Normal memory 
> mappings.

Yes. Ideally we would figure out some sort of automated way to decide
this, but I'm not sure what that would look like.

Otherwise I think we are looking at a list of compatible values which
are mapped as normal memory.
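Such a whitelist could be as simple as the following sketch. This is illustrative only, not actual Xen code; the function and array names are hypothetical, and a real implementation would hook into Xen's device-tree handling rather than plain string comparison.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical whitelist of device-tree "compatible" strings whose
 * regions would get a Normal (cacheable) stage-2 mapping instead of
 * the default Device mapping. */
static const char *const normal_memory_compat[] = {
    "mmio-sram",
    NULL,
};

/* Return 1 if a node with this compatible string should be mapped as
 * Normal memory at stage 2, 0 for the default Device attributes. */
static int map_as_normal_memory(const char *compat)
{
    const char *const *c;

    for (c = normal_memory_compat; *c; c++)
        if (!strcmp(compat, *c))
            return 1;
    return 0;
}
```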

> The rules for combining the memory attributes from S1 and S2 translations
> suggest that mapping things at S2 with Normal memory Inner/Outer WB cacheable
> would give the guest/S1 flexibility in choosing the final attributes.
> It seems to me like guest drivers have the best knowledge to decide how to
> map the node memory regions.
> Keeping the S2 shareability set to inner (like we already do for memory)
> seems to be a good idea though.
> So the question I had is, why do we map nodes at S2 with DEVICE attributes at 
> all?
> Am I missing something?

I think the concern was exposing potentially UNPREDICTABLE /
IMPLEMENTATION DEFINED etc behaviour via a guest which maps MMIO regions
as normal memory in S1. By using a device memory mapping in S2 we force
a safe overall result.
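That "forcing" works because of the stage-1/stage-2 attribute combining rule: the combined memory type is the more restrictive of the two, so Device at S2 wins regardless of the guest's S1 choice, while Normal WB at S2 lets the guest's S1 attributes take effect. A rough sketch of that rule (illustrative only, ignoring the finer cacheability/shareability sub-cases):

```c
#include <assert.h>

/* Memory types ordered from most to least restrictive. */
enum memtype { DEVICE = 0, NORMAL_NC = 1, NORMAL_WT = 2, NORMAL_WB = 3 };

/* Sketch of the ARM S1/S2 combining rule: the result is the weaker
 * (more restrictive) of the two types. */
static enum memtype combine_s1_s2(enum memtype s1, enum memtype s2)
{
    return s1 < s2 ? s1 : s2;
}
```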

I've not refreshed my memory on which way round this goes though, so
perhaps the worry is/was unfounded. In particular, perhaps on v8 this
ends up as CONSTRAINED UNPREDICTABLE, which might be safe enough (again,
I've not checked).

I'd rather not have v7 and v8 differ in such a fundamental default, but
it might be justified I suppose. Likewise for e.g. doing something
different for dom0/hw-dom vs. others.

