
Re: [Xen-devel] [PATCH] xen/arm64: Use __flush_dcache_area instead of __flush_dcache_all


> >> >    As far as I am aware, UEFI may have an arbitrary set of mappings
> >> >    present during boot services time, with arbitrary drivers active.
> >> >    That means that UEFI can create dirty cache entries concurrently with
> >> >    the bootloader, in addition to the usual clean entries that can be
> >> >    allocated at any time thanks to speculative fetches.
> UEFI specifies that memory in the EFI memory map is flat mapped, but
> I'd have to look to see if it prohibits other mappings in addition to
> that.  Other mappings are implementation dependent (devices, etc., or
> memory not in the EFI memory map.)

Regardless of the set of mappings that may exist, the key point is that
we don't know what may have been allocated into a cache. Any portion of
memory could have entries in the cache hierarchy, which could be clean
or dirty.

> In reviewing the AArch64-specific portion of the spec (section 2.3.6,
> AArch64 Platforms), it says in part:
>
>   Implementations of boot services will enable architecturally
>   manageable caches and TLBs, i.e. those that can be managed directly
>   using implementation independent registers, using mechanisms and
>   procedures defined in the ARM Architecture Reference Manual.  They
>   should not enable caches requiring platform information to manage or
>   invoke non-architectural cache/TLB lockdown mechanisms.
>
> Does this imply that system level caches should not be enabled?

Arguably yes, but on a technicality no, because it's possible to flush
them by VA (albeit extremely slowly).

> UEFI also specifies uni-processor, so we don't have to worry about
> other cores' caches.


> The spec does not mention the details of memory attributes - EDK2
> currently maps memory as non-shared, attributes 0xFF.


> >> >
> >> >    So while we're in the bootloader, any system level caches can have
> >> >    entries allocated to it, and as those aren't architected the only
> >> >    thing we can do is flush those by VA for the portions we care about.
> Maybe the firmware is 'wrong' to enable these caches?

It is certainly arguable.

> Are we guaranteed that these caches can be disabled on all
> implementations?

I believe on some implementations the non-secure side will not have
access to the control registers. Beyond that I don't know.

> Updating/clarifying the spec to have these disabled could simplify the
> problem a bit.

Possibly, yes. I'm not sure what we'd clarify it to say, however.

> >> > So we can have "initially consistent", but that might not be useful.
> >>
> >> Hrm, yes, rather unfortunate.
> >>
> >> >
> >> > > > There are a tonne of subtleties here, and certain properties we would
> >> > > > like (e.g. a completely clean cache hierarchy upon entry to the OS)
> >> > > > aren't necessarily possible to provide in general (thanks to the 
> >> > > > wonders
> >> > > > of non-architected system level caches, interaction with bootloaders,
> >> > > > etc).
> >> > >
> >> > > I suppose it is easier for the UEFI implementation, since it knows the
> >> > > platform it runs on and there knows about the caches. Harder for the
> >> > > stub though :-/
> >> >
> >> > Yeah. System-level caches interact badly with pretty much any scenario
> >> > where ownership of the MMU is transferred (UEFI boot, kexec), and there
> >> > doesn't seem to be a single agent that can be charged with ownership of
> >> > maintenance.
> >> >
> >> > This is something I've been meaning to revisit, but it takes a while to
> >> > get back up to speed on the minutiae of the cache architecture and the
> >> > rules for memory attributes, and I haven't had the time recently.
> >> >
> >> > We do have a very heavy hammer that we know will work: flushing the
> >> > memory by PA in the stub once the MMU and caches are disabled. A
> >> > back-of-the-envelope calculation shows that could take minutes to issue
> >> > on a server machine (say 2GHz, with 16GB of RAM), so that's very much a
> >> > last resort.
> >>
> >> Ouch...
> >
> > Looking at that again, I was off by a factor of 1000, and that
> > actually comes to about 0.13 seconds (though solely for CMO issue). So
> > that might not be as blunt as I made it out to be, but it's still not
> > great as platforms get larger.
> I think we should be able to limit the memory we need to flush, as
> there should be no need to flush the free memory, just what is in use.
> I think that good portions, if not all, of that could be flushed from
> the C code with caches enabled, as we know they won't be modified
> after that point (FDT, initrd, etc.).  We can do this in C code after
> calling ExitBootServices(), and immediately before calling the Xen
> entry point efi_xen_start().  There are no EFI calls in this path
> between the last bit of C code and the disabling of caches and MMU, so
> I think we should be able to identify whether anything would need to
> be flushed in the ASM code with caches off.

I agree the vast majority of this maintenance could be done by C code.

There might be a need to flush that free memory, depending on how it is
mapped, unless you are proposing a lazy flush-before-use strategy.

> >> > We could try to manage the system caches explicitly, but then we need
> >> > code to do so very early, we need to have them described in the
> >> > appropriate firmware tables, and they need to be manageable from the
> >> > non-secure side (which I believe is not always the case). That somewhat
> >> > defeats the portability aspect of booting as an EFI application.
> >> >
> >> > So yes, it's harder for the stub :(
> >>
> >> Indeed.
> >>
> >> Probably this isn't even close to the correct venue. I'm not sure where
> >> better to transfer it though. One of the Linaro lists perhaps?
> >
> > I'm not really sure where the right place is. There are quite a few
> > parties who have an interest in this problem (whether they realise it or
> > not). It would be nice to figure out more precisely what's happening
> > here first, anyhow.
> >
> > Mark.
> Glad I'm not the only one confused :)  Getting back to the practical
> side of this, I'm thinking I (or Suravee) should update the patch to
> add the flushing of the FDT, as this is required for booting with the
> change to flush_dcache_area(), even if the exact mechanism isn't
> understood.  This gets us a more correct and working implementation,
> but not a final/robust implementation.

On a practical front, yes.

It would be nice to know if the attributes are actually the problem.
Is it possible to build a UP Xen which maps memory as UEFI does (i.e.
non-shareable)? Or is that problematic?


Xen-devel mailing list


