
[Xen-devel] [PATCH v8 00/13] Prerequisite patches for COLO



This patch set contains the prerequisite patches for the COLO feature. Refer to:
http://wiki.xen.org/wiki/COLO_-_Coarse_Grain_Lock_Stepping

Patch status:
1. Acked patches: patches 2-4, 6-13
2. Reviewed patches: all
3. New patches: none
Note:
1. Patches 1 and 7 are updated according to Wei Liu's comments
2. Patches 2-3 are updated because patch 1 is updated
3. Patches 8, 9, 11 and 12 from v7 have been moved to another series
4. Patches 13 and 14 from v7 have been folded into one patch (patch 9)
5. The commit message for patch 5 is not updated (waiting for replies
   from Ian C and Ian J)

You can get the code here:
https://github.com/wencongyang/xen/tree/colo_pre_v8
You can get the complete set of COLO-related patches here:
https://github.com/wencongyang/xen/tree/colo_v10

v6->v7:
 - Addressed comments from Konrad Rzeszutek Wilk

v5->v6:
 - Fix some bugs found in the test

v4->v5:
 - Rebased to the latest xen
 - Addressed comments from last round

v3->v4:
 - Rebased to the latest migration v2 branch
 - Addressed comments from last round

v2->v3:
 - Merged '[PATCH v2 0/6] Misc cleanups for libxl' into this patch set
   for easier review
 - Addressed review comments
 - Add back channel to libxc
 - Introduce should_checkpoint callback
 - Introduce DIRTY_BITMAP record on libxc side
 - Introduce COLO_CONTEXT record on libxl side
 - Ported to Libxl migration v2

v1->v2:
 - Rebased to [PATCH v2 0/6] Misc cleanups for libxl
 - Add a bugfix for the error handling of process_record

Wen Congyang (13):
  libxl/remus: init checkpoint callback in Remus setup callback
  tools/libxl: move remus code into libxl_remus.c
  tools/libxl: move save/restore code into libxl_dom_save.c
  libxl/save: Refactor libxl__domain_suspend_state
  tools/libxc: support to resume uncooperative HVM guests
  tools/libxl: introduce enum type libxl_checkpointed_stream
  migration/save: pass checkpointed_stream from libxl to libxc
  tools/libxl: export logdirty_init
  tools/libxl: rename remus device to checkpoint device
  tools/libxl: adjust the indentation
  tools/libxl: store remus_ops in checkpoint device state
  tools/libxl: move remus state into a seperate structure
  tools/libxl: seperate device init/cleanup from checkpoint device layer

 tools/libxc/include/xenguest.h        |   6 +-
 tools/libxc/xc_nomigrate.c            |   3 +-
 tools/libxc/xc_resume.c               |  25 +-
 tools/libxc/xc_sr_common.h            |  12 +-
 tools/libxc/xc_sr_save.c              |  17 +-
 tools/libxl/Makefile                  |   4 +-
 tools/libxl/libxl.c                   |  81 +---
 tools/libxl/libxl.h                   |  19 +
 tools/libxl/libxl_checkpoint_device.c | 282 +++++++++++++
 tools/libxl/libxl_create.c            |  44 +-
 tools/libxl/libxl_dom.c               | 740 ----------------------------------
 tools/libxl/libxl_dom_save.c          | 521 ++++++++++++++++++++++++
 tools/libxl/libxl_dom_suspend.c       | 207 ++++++----
 tools/libxl/libxl_internal.h          | 217 ++++++----
 tools/libxl/libxl_netbuffer.c         | 117 +++---
 tools/libxl/libxl_nonetbuffer.c       |  10 +-
 tools/libxl/libxl_remus.c             | 424 +++++++++++++++++++
 tools/libxl/libxl_remus_device.c      | 327 ---------------
 tools/libxl/libxl_remus_disk_drbd.c   |  56 +--
 tools/libxl/libxl_save_callout.c      |   4 +-
 tools/libxl/libxl_save_helper.c       |   3 +-
 tools/libxl/libxl_stream_read.c       |   7 +-
 tools/libxl/libxl_stream_write.c      |  18 +-
 tools/libxl/libxl_types.idl           |  10 +-
 tools/libxl/xl_cmdimpl.c              |  18 +-
 25 files changed, 1709 insertions(+), 1463 deletions(-)
 create mode 100644 tools/libxl/libxl_checkpoint_device.c
 create mode 100644 tools/libxl/libxl_dom_save.c
 create mode 100644 tools/libxl/libxl_remus.c
 delete mode 100644 tools/libxl/libxl_remus_device.c

-- 
2.5.0

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

