
[Xen-devel] [RFC PATCH 0/3] Live update boot memory management



When doing a live update, Xen needs to be very careful not to scribble
on pages which contain guest memory or state information for the
domains which are being preserved.

The information about which pages are in use is contained in the live
update state passed from the previous Xen — which is mostly just a
guest-transparent live migration data stream, except that it points to
the page tables in place in memory while traditional live migration
obviously copies the pages separately.

Our initial implementation actually prepended a list of 'in-use' ranges
to the live update state, and made the boot allocator treat them the
same as 'bad pages'. That worked well enough for initial development
but wouldn't scale to a live production system, mainly because the boot
allocator has a limit of 512 memory ranges that it can keep track of,
and a real system would end up more fragmented than that.

My other concern with that approach is that it required two passes over
the domain-owned pages. We have to do a later pass *anyway*, as we set
up ownership in the frametable for each page — and that has to happen
after we've managed to allocate a 'struct domain' for each page_info to
point to. If we want to keep the pause time due to a live update down
to a bare minimum, doing two passes over the full set of domain pages
isn't my favourite strategy.

So we've settled on a simpler approach — reserve a contiguous region
of physical memory which *won't* be used for domain pages. Let the boot
allocator see *only* that region of memory, and plug the rest of the
memory in later only after doing a full pass of the live update state.

This means that we have to ensure the reserved region is large enough,
but ultimately we had that problem either way — even if we were
processing the actual free ranges, if the page_info grew and we didn't
have enough contiguous space for the new frametable we were hosed
anyway.

So the straw man patch ends up being really simple, as a seed for
bikeshedding. Just take a 'liveupdate=' region on the command line,
which kexec(8) can find from the running Xen. The initial Xen needs to
ensure that it *won't* allocate from that range any pages which will
subsequently need to be preserved across live update; that part isn't
done yet. We just need to make sure that any page which might be given
to share_xen_page_with_guest() is allocated appropriately.

The part which actually hands over the live update state isn't included
yet, so this really does just *defer* the addition of the memory until
a little bit later in __start_xen(). Actually taking ranges out of it
will come later.


David Woodhouse (3):
      x86/setup: Don't skip 2MiB underneath relocated Xen image
      x86/boot: Reserve live update boot memory
      Add KEXEC_RANGE_MA_LIVEUPDATE

 xen/arch/x86/machine_kexec.c |  15 ++++--
 xen/arch/x86/setup.c         | 122 +++++++++++++++++++++++++++++++++++++++----
 xen/include/public/kexec.h   |   1 +
 3 files changed, 124 insertions(+), 14 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
