
Re: [Xen-devel] [PATCH] xen/arm64: Use __flush_dcache_area instead of __flush_dcache_all


> > It would be nice to have cross-OS agreement on boot protocols, but at
> > the moment the table is somewhat empty beyond Linux and Xen. I had a
> > conversation with the FreeBSD guys working on 64-bit ARM stuff, but
> > they're still at an early stage, and I can't recall the specifics of
> > their boot process.
> I was thinking (perhaps naïvely) that these problems would be mostly the
> same for any OS and that the solution ought to be specified in terms
> which allow any OS to know what to expect and/or what is expected of
> them. Really OSes ought to be designing their boot protocols within the
> set of constraints implied by the (improved) UEFI launching spec, not
> vice versa.

w.r.t. anything booting via UEFI, I would expect that to be covered by
the output of the UEFI forum. The cross-OS agreement would be for stuff
not covered by UEFI (e.g. booting without UEFI, whether to use the UEFI
memory map or one provided elsewhere, etc).


> > > Right, that's what I was thinking. UEFI enters bootloader with
> > > everything it has done all nice and clean and consistent. Anything the
> > > stub then does it is responsible for maintaining the cleanliness.
> > 
> > There are two horrible parts here:
> > 
> >  * EFI has no idea what a boot loader is. As far as it's aware, the
> >    kernel + efi stub is just another UEFI application until it calls
> >    ExitBootServices. For all UEFI knows, it may as well be a calculator
> >    until that point, and flushing the entire cache hierarchy for a
> >    calculator seems a little extreme.
> Most EFI applications are not that trivial though, and any non-trivial
> app is going to (with some reasonably high probability) need to touch
> the MMU. I don't see the problem with doing something which always works
> even if it might be overkill for some small subset of things you might
> be launching.

That sounds reasonable to me.

> >  * Defining "nice and clean and consistent".
> >   
> >    As far as I am aware, UEFI may have an arbitrary set of mappings
> >    present during boot services time, with arbitrary drivers active. 
> >    That means that UEFI can create dirty cache entries concurrently with
> >    the bootloader, in addition to the usual clean entries that can be
> >    allocated at any time thanks to speculative fetches.
> >    
> >    So while we're in the bootloader, any system level caches can have
> >    entries allocated to it, and as those aren't architected the only
> >    thing we can do is flush those by VA for the portions we care about.
> >    
> > So we can have "initially consistent", but that might not be useful.
> Hrm, yes, rather unfortunate.
> > 
> > > > There are a tonne of subtleties here, and certain properties we would
> > > > like (e.g. a completely clean cache hierarchy upon entry to the OS)
> > > > aren't necessarily possible to provide in general (thanks to the wonders
> > > > of non-architected system level caches, interaction with bootloaders,
> > > > etc).
> > > 
> > > I suppose it is easier for the UEFI implementation, since it knows the
> > > platform it runs on and there knows about the caches. Harder for the
> > > stub though :-/
> > 
> > Yeah. System-level caches interact badly with pretty much any scenario
> > where ownership of the MMU is transferred (UEFI boot, kexec), and there
> > doesn't seem to be a single agent that can be charged with ownership of
> > maintenance.
> > 
> > This is something I've been meaning to revisit, but it takes a while to
> > get back up to speed on the minutiae of the cache architecture and the
> > rules for memory attributes, and I haven't had the time recently.
> > 
> > We do have a very heavy hammer that we know will work: flushing the
> > memory by PA in the stub once the MMU and caches are disabled. A
> > back-of-the-envelope calculation shows that could take minutes to issue
> > on a server machine (say 2GHz, with 16GB of RAM), so that's very much a
> > last resort.
> Ouch...

Looking at that again, I was off by a factor of 1000, and that actually
comes to about 0.13 seconds (though solely for CMO issue). So that might
not be as blunt as I made it out to be, but it's still not great as
platforms get larger.

> > We could try to manage the system caches explicitly, but then we need
> > code to do so very early, we need to have them described in the
> > appropriate firmware tables, and they need to be manageable from the
> > non-secure side (which I believe is not always the case). That somewhat
> > defeats the portability aspect of booting as an EFI application.
> > 
> > So yes, it's harder for the stub :(
> Indeed.
> Probably this isn't even close to the correct venue. I'm not sure where
> better to transfer it though. One of the Linaro lists perhaps?

I'm not really sure where the right place is. There are quite a few
parties who have an interest in this problem (whether they realise it or
not). It would be nice to figure out more precisely what's happening
here first, anyhow.


Xen-devel mailing list