Re: [PATCH RFC 1/2] docs/design: Add a design document for Live Update



On Fri, 2021-05-07 at 14:15 +0200, Jan Beulich wrote:
> On 07.05.2021 13:44, Julien Grall wrote:
[...]
> > 
> > It is a known convenient place. It may be difficult to find a
> > similar spot on a host that has been long-running.
> 
> I'm not convinced: If it was placed in the kexec area at a 2Mb
> boundary, it could just run from there. If the kexec area is
> large enough, this would work any number of times (as occupied
> ranges become available again when the next LU cycle ends).

To make sure the next Xen can be loaded and run anywhere in case kexec
cannot find a large enough region under 4G, we need to:

1. teach kexec to load the whole image contiguously. At the moment
kexec prepares scattered 4K pages that are not runnable until they are
copied to a contiguous destination (a minimal sketch of this copy step
follows the list). What if it can't find a contiguous range?

2. teach Xen that it can be entered with pre-existing page tables that
map it above 4G. We can't use the real/protected mode entry because
that path needs to start below 4G physically. Maybe a modified version
of the EFI entry path (my familiarity with the Xen EFI entry is
limited)?

3. rewrite all the early boot bits that assume Xen is under 4G, along
with its bundled page tables, which only map below 4G.
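
For item 1, here is a minimal, hypothetical sketch of the copy step
that turns the scattered 4K pages into a runnable contiguous image.
This is not the actual kexec code; all names are made up for
illustration:

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u

struct scattered_page {
    void *src;          /* one 4K page as prepared by kexec */
    uint64_t dst_off;   /* offset of that page within the final image */
};

/*
 * Copy every prepared page into 'dst_base' so the image becomes
 * contiguous and runnable.  'dst_base' must already be a single
 * reservation large enough for the whole image -- which is exactly
 * the part that is hard when memory below 4G is tight or fragmented.
 */
static void assemble_image(void *dst_base,
                           const struct scattered_page *pages,
                           unsigned int nr_pages)
{
    for (unsigned int i = 0; i < nr_pages; i++)
        memcpy((char *)dst_base + pages[i].dst_off,
               pages[i].src, PAGE_SIZE);
}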

These are the obstacles off the top of my head. So I think there is no
fundamental reason why we have to place Xen #2 where Xen #1 was, but
doing so is a massive reduction in pain that allows us to reuse much
of the existing Xen code.

Maybe this part does not have to be part of the ABI and we just
suggest this as one way of loading the next Xen to cope with growth?
This is the best way I can think of (loading Xen where it was and
expanding into the reserved bootmem if needed) that does not require
rewriting a lot of early boot code and can pretty much guarantee
success even if memory is tight and fragmented.
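
As a rough illustration of that placement policy, here is a
hypothetical sketch; the names, types and the assumption that the
reserved bootmem sits directly after the old Xen image are mine and
not part of any proposed ABI:

#include <stdint.h>
#include <stdbool.h>

struct phys_range {
    uint64_t start;     /* physical start address */
    uint64_t size;      /* length in bytes */
};

/*
 * Try to place Xen #2 where Xen #1 lives, spilling over into the
 * reserved bootmem if the new image has grown.  This sketch assumes
 * the reserved bootmem is physically contiguous with the old image.
 */
static bool place_next_xen(const struct phys_range *old_xen,
                           const struct phys_range *reserved_bootmem,
                           uint64_t new_image_size,
                           struct phys_range *dst)
{
    uint64_t avail = old_xen->size;

    if (reserved_bootmem->start == old_xen->start + old_xen->size)
        avail += reserved_bootmem->size;

    if (new_image_size > avail)
        return false;           /* fall back to some other strategy */

    dst->start = old_xen->start;
    dst->size = new_image_size;
    return true;
}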

Hongyan

 

