
[Xen-devel] [PATCH 0/8] Early cleanups and bug fixes in preparation for live update



Picking out the things from the live update tree which are ready to be
merged.

A couple of actual bug fixes discovered along the way, plus a weird
off-by-2MiB error with the start of the Xen image in really early
memory management. That one wasn't *strictly* a bug, because those
pages did get reclaimed and fed into the heap in the end, but it's
annoying enough that I want to fix it (and eventually I want the live
update reserved bootmem to fit snugly under the Xen image, so that the
slack space we reserve can be used for *either* of them to grow).
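
For the off-by-2MiB one, the picture is roughly this: the image lives
2MiB above the start of its reservation, and early boot was treating
the whole reservation as off-limits, so the 2MiB underneath the image
got skipped. A trivial standalone sketch of the arithmetic (made-up
addresses and names, not the actual setup.c code):

#include <stdint.h>
#include <stdio.h>

#define MB(x) ((uint64_t)(x) << 20)

int main(void)
{
    /* Hypothetical: the relocated reservation starts here... */
    uint64_t xen_phys_start = MB(512);
    /* ...but the image itself sits 2MiB into it. */
    uint64_t image_start = xen_phys_start + MB(2);

    /* Difference between the buggy lower bound (reservation start)
     * and the bound that's actually needed (image start), in 4KiB
     * pages: 512 pages, i.e. the 2MiB this series stops skipping. */
    printf("pages skipped unnecessarily: %llu\n",
           (unsigned long long)((image_start - xen_phys_start) >> 12));
    return 0;
}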

Make it possible to use vmap() earlier, which came out of Wei's work on
removing the directmap and is also needed for live update.
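
The pattern there is the usual system_state check; a sketch of the
idea (hypothetical helper name, though the interfaces it calls are
real Xen ones), with vmap's metadata pages coming from the boot
allocator until the heap is ready:

#include <xen/kernel.h>
#include <xen/mm.h>

static struct page_info *vm_alloc_page(void)
{
    if ( system_state == SYS_STATE_early_boot )
    {
        /* end_boot_allocator() hasn't run yet; take a page from the
         * boot allocator instead of the (nonexistent) heap. */
        mfn_t mfn = alloc_boot_pages(1, 1);

        return mfn_to_page(mfn);
    }

    /* Normal case: the domheap is up. */
    return alloc_domheap_page(NULL, 0);
}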

Finally, a little bit of preparation/cleanup of __setup_xen() to make
way for what's to come, although it stands alone.
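
(Concretely, that means the dom0-building tail of __setup_xen() ends
up behind a single call, something like the following; the argument
list here is illustrative rather than the exact one in the patch.)

    dom0 = create_dom0(mod, modules_headroom, initrd, kextra, loader);
    if ( !dom0 )
        panic("Could not set up DOM0 guest OS\n");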

David Woodhouse (7):
      x86/smp: reset x2apic_enabled in smp_send_stop()
      x86/setup: Fix badpage= handling for memory above HYPERVISOR_VIRT_END
      x86/setup: Don't skip 2MiB underneath relocated Xen image
      xen/vmap: allow vmap() to be called during early boot
      x86/setup: move vm_init() before end_boot_allocator()
      x86/setup: simplify handling of initrdidx when no initrd present
      x86/setup: lift dom0 creation out into create_dom0() function

Wei Liu (1):
      xen/vmap: allow vm_init_type to be called during early_boot

 xen/arch/x86/setup.c    | 194 +++++++++++++++++++++++++-----------------------
 xen/arch/x86/smp.c      |   1 +
 xen/common/page_alloc.c |  82 +++++++++++++++++++-
 xen/common/vmap.c       |  45 ++++++++---
 4 files changed, 219 insertions(+), 103 deletions(-)

