
[Xen-devel] [RFC PATCH for-next 00/18] VM forking



The following series implements VM forking for Intel HVM guests, allowing
fast creation of identical VMs without the associated high startup cost of
booting or restoring the VM from a savefile.

The main design goal of this series is to minimize the time required to
create a VM fork. To achieve this, the forking process is split into two
steps: 1) forking the VM and 2) starting its device model. This split is
based on our observation that creating the VM fork is fast, while launching
the device model can be quite slow.

The first step creates the fork using the new "xl fork-vm" command. The
parent VM is expected to remain paused after forks are created from it
(which differs from what process forking normally entails). During this
forking operation the HVM context and VM settings are copied over to the
new forked VM. This operation is fast, and it allows the forked VM to be
unpaused and to be monitored and accessed with VMI. Note, however, that
without its device model running the fork is bound to misbehave or crash
(depending on what is executing in it) when it tries to access devices that
would be emulated by QEMU. We anticipate that for certain use-cases this is
acceptable, for example when fuzzing code segments that don't require such
I/O devices.
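
A minimal sketch of this first step (the exact xl invocation syntax here is
an assumption based on the description above, not necessarily the final
interface):

    # Parent must be paused; forks are created from its current state.
    xl pause <parent-domid>
    xl fork-vm <parent-domid>
    # The fork can now be unpaused and inspected with VMI, even though
    # its device model is not running yet.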

The second step launches the device model for the forked VM, which requires
a QEMU Xen savefile to be generated manually from the parent VM. This can
be accomplished by connecting to the parent's QMP socket and issuing the
"xen-save-devices-state" command, as documented by QEMU:
https://github.com/qemu/qemu/blob/master/docs/xen-save-devices-state.txt
Once the QEMU Xen savefile has been generated, the new "xl fork-launch-dm"
command is used to launch QEMU and load the savefile for the fork.
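
For example (the QMP socket path and the fork-launch-dm argument order are
assumptions for illustration; "qmp_capabilities" and
"xen-save-devices-state" are standard QMP commands):

    # Generate the QEMU savefile from the paused parent's QMP socket.
    socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-<parent-domid> <<'EOF'
    { "execute": "qmp_capabilities" }
    { "execute": "xen-save-devices-state",
      "arguments": { "filename": "/tmp/parent-dm-state" } }
    EOF

    # Launch the fork's device model from the saved state.
    xl fork-launch-dm <config-file> /tmp/parent-dm-state <fork-domid>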

At runtime the forked VM starts with an empty p2m that gets lazily
populated as the VM generates EPT faults, similar to how altp2m views are
populated. If the faulting access is a read, the p2m entry is populated
with an entry shared with the parent via memory sharing. For write
accesses, or when memory sharing isn't possible, a new page is allocated
and its contents are copied over from the parent VM. Forks can themselves
be forked if needed, allowing for further memory savings.
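
In pseudo-C, the population logic on an EPT fault looks roughly like the
following (a sketch only; fork_populate_gfn, mfn_from_parent,
share_with_parent and copy_from_parent are illustrative names, not the
functions used in the series):

    /* Sketch: lazily populate one gfn in a fork's p2m on an EPT fault. */
    static int fork_populate_gfn(struct domain *fork, gfn_t gfn, bool write)
    {
        struct domain *parent = fork->parent;
        mfn_t mfn = mfn_from_parent(parent, gfn); /* illustrative lookup */

        if ( mfn_eq(mfn, INVALID_MFN) )
            return -ENOENT;

        /* Read fault: try to map the parent's page via memory sharing. */
        if ( !write && share_with_parent(fork, parent, gfn) == 0 )
            return 0;

        /* Write fault, or sharing failed: allocate and copy the page. */
        return copy_from_parent(fork, parent, gfn);
    }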

The series has been tested with both Linux and Windows VMs and functions as
expected. VM forking time was measured at 0.018s; device-model launch takes
around 1s, depending largely on the number of devices being emulated.

Patches 1-2 implement changes to existing internal Xen APIs to make VM forking
possible.

Patches 3-4 are simple code-formatting fixes in the toolstack's and Xen's
memory sharing paths, with no functional changes.

Patches 5-16 are code cleanups and adjustments to the Xen memory sharing
subsystem, with no functional changes.

Patch 17 adds the hypervisor-side code implementing VM forking.
Patch 18 adds the toolstack-side code implementing VM forking.

Tamas K Lengyel (18):
  x86: make hvm_{get/set}_param accessible
  xen/x86: Make hap_get_allocation accessible
  tools/libxc: clean up memory sharing files
  x86/mem_sharing: cleanup code in various locations
  x86/mem_sharing: make get_two_gfns take locks conditionally
  x86/mem_sharing: drop flags from mem_sharing_unshare_page
  x86/mem_sharing: don't try to unshare twice during page fault
  x86/mem_sharing: define mem_sharing_domain to hold some scattered
    variables
  x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in
    relinquish_shared_pages
  x86/mem_sharing: Make add_to_physmap static and shorten name
  x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk
  x86/mem_sharing: ASSERT that p2m_set_entry succeeds
  x86/mem_sharing: Enable mem_sharing on first memop
  x86/mem_sharing: Skip xen heap pages in memshr nominate
  x86/mem_sharing: check page type count earlier
  xen/mem_sharing: VM forking
  xen/tools: VM forking toolstack side

 tools/libxc/include/xenctrl.h     |  28 +-
 tools/libxc/xc_memshr.c           |  24 +-
 tools/libxl/libxl.h               |   6 +
 tools/libxl/libxl_create.c        | 212 +++++---
 tools/libxl/libxl_dm.c            |   2 +-
 tools/libxl/libxl_dom.c           |  83 ++--
 tools/libxl/libxl_internal.h      |   1 +
 tools/libxl/libxl_types.idl       |   1 +
 tools/xl/xl.h                     |   4 +
 tools/xl/xl_cmdtable.c            |  15 +
 tools/xl/xl_saverestore.c         |  69 +++
 tools/xl/xl_vmcontrol.c           |   8 +
 xen/arch/x86/hvm/hvm.c            | 206 ++++----
 xen/arch/x86/mm/hap/hap.c         |   3 +-
 xen/arch/x86/mm/mem_sharing.c     | 777 ++++++++++++++++++++----------
 xen/arch/x86/mm/p2m.c             |  34 +-
 xen/common/memory.c               |   2 +-
 xen/drivers/passthrough/pci.c     |   2 +-
 xen/include/asm-x86/hap.h         |   1 +
 xen/include/asm-x86/hvm/domain.h  |   7 +-
 xen/include/asm-x86/hvm/hvm.h     |   4 +
 xen/include/asm-x86/mem_sharing.h |  82 +++-
 xen/include/asm-x86/p2m.h         |  14 +-
 xen/include/public/memory.h       |   5 +
 xen/include/xen/sched.h           |   1 +
 25 files changed, 1094 insertions(+), 497 deletions(-)

-- 
2.20.1

