
[Xen-devel] [PATCH RFC v3 0/12] Live migration for VMs with QEMU backed local storage



Hi all,

I have worked on a solution to migrate domains that use QEMU as the backend
disk driver. I have adapted the migration flow to piggyback on the
"drive-mirror" capability already provided by QEMU.

Overview

1. The "xl migrate" command has an additional "-q" flag. When provided the local
storage of the domain is mirrored to the destination during the migration
process.

2. Internally, the modification consists of adding a new
libxl__stream_read_state struct to the libxl__domain_create_state structure and
a libxl__stream_write_state struct to the libxl__domain_save_state struct.

3. Migration flow can now be divided into three phases:
   a. Phase One: Copies the necessary PFNs/params to start a QEMU process on the
      destination. QEMU is started with the "-incoming defer" option.
   b. Phase Two: Disk is mirrored using the QEMU embedded NBD server.
   c. Phase Three: Once the disk is completely mirrored, the virtual RAM of
      the domain is live migrated to the destination. This phase most closely
      resembles the current migration flow.

4. If the "-q" option is not provided, the migration is equivalent to the
current migration flow.
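
For example, with the new flag the command line looks like this (the domain
name "debian-hvm" and destination host "dst-host" are made-up placeholders):

    # Live-migrate the domain, mirroring its QEMU-backed local storage
    xl migrate -q debian-hvm dst-host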

The new migration flow follows this major sequence of steps (a sketch of the
corresponding QMP traffic follows the list):
1. The 1st stream copies the necessary PFNs and params from the source to the
destination so that the QEMU process can be started there.
2. The QEMU process is started on the destination with the option "-incoming
defer". (This creates the QEMU process but its main loop does not start
running until the "migrate-incoming" command is executed.)
3. "drive mirror" QMP command is executed so that the disk is mirrored to the
destination node.
4. An event listener waits for the QMP BLOCK_JOB_READY event sent by QEMU,
which signals that the destination disk is fully synchronized with the source
(from that point the mirror job keeps the two in sync).
5. The 2nd stream copies the virtual RAM from the source to the destination.
At this point, the domain is suspended on the source.
6. "migrate incoming" QMP command is executed in the destination node.
7. Domain is restored in destination.
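
A rough sketch of the QMP exchanges behind steps 2-6, for orientation only:
the port ("11000") and device name ("ide0-hd0") are the hardcoded values
mentioned in the notes below, the destination host "dst-host" and the
migrate-incoming URI are placeholders, and the JSON approximates the commands
involved rather than quoting the branch verbatim:

    # Destination QEMU (started with "-incoming defer"): export the disk
    { "execute": "nbd-server-start",
      "arguments": { "addr": { "type": "inet",
                               "data": { "host": "0.0.0.0", "port": "11000" } } } }
    { "execute": "nbd-server-add",
      "arguments": { "device": "ide0-hd0", "writable": true } }

    # Source QEMU: mirror the local disk into the destination's NBD export
    { "execute": "drive-mirror",
      "arguments": { "device": "ide0-hd0", "mode": "existing", "sync": "full",
                     "target": "nbd:dst-host:11000:exportname=ide0-hd0" } }

    # Source QEMU emits this (abridged) once source and target are in sync
    { "event": "BLOCK_JOB_READY",
      "data": { "device": "ide0-hd0", "type": "mirror" } }

    # Destination QEMU: start accepting the deferred migration stream
    { "execute": "migrate-incoming", "arguments": { "uri": "tcp:0:4444" } }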

Notes

1. Note that, as of now, "xen_platform_pci=0" must be set for this feature to
work (see the config fragment after these notes). This is necessary so that
the block devices are seen by QEMU. Further modification would be needed for
the "xen_platform_pci=1" case if we still want to use the NBD mirroring
capability provided by QEMU.
2. The current branch still has some hardcoded values, but they can easily be
removed (I wanted initial feedback first):
    a. Port used for disk mirroring ("11000"): this would be replaced by
       opening a socket on the destination and sending the chosen port number
       to the source node.
    b. Name of the block devices ("ide0-hd0"): Currently the branch only
       supports domains with one IDE drive. This constraint can easily be
       removed by querying QEMU for the block devices and checking their
       backends in Xenstore. The names of the block devices to be mirrored
       would then be sent to the destination node for starting the NBD server.
3. This feature needs a small patch to QEMU-Xen.
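
For reference, a guest config fragment satisfying note 1 might look like the
following (the domain name and disk path are made-up placeholders):

    name = "debian-hvm"
    builder = "hvm"
    # Required for now so that the block devices are seen by QEMU (note 1)
    xen_platform_pci = 0
    # A single IDE drive, matching the "ide0-hd0" constraint in note 2b
    disk = [ 'file:/var/lib/xen/images/debian-hvm.img,hda,w' ]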

Here is a link to the Xen branch on GitHub:
https://github.com/balvisio/xen/tree/feature/migration_with_local_disks_mirroring

Here is a link to the QEMU-Xen branch on GitHub:
https://github.com/balvisio/qemu-xen/tree/feature/migration_with_local_disks_mirroring

Any feedback/suggestion is appreciated.

Cheers,

Bruno

Signed-off-by: Bruno Alvisio <bruno.alvisio@xxxxxxxxx>

