
Re: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)



A simple copy operation should not require a complex DRBD setup.

DRBD would be nice for live migration. But you usually don't have a DRBD device
for each disk, so you would have to re-attach the disk in order to create a
DRBD device, resulting in a VM reboot (correct?). It would be nice to have a
default DRBD overlay device for each disk, so you could start a DRBD live
migration at any time. It shouldn't have too big a performance impact (software
interrupts?), since there is no connection and therefore no sync logic until
the device gets connected.
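
Very roughly, creating such an overlay could look like the Python sketch below
(DRBD 8.3-style resource file with external metadata; every host name, device
path and the port number is only an invented example, not anything that exists
in xapi today):

import subprocess

RES_TEMPLATE = """
resource %(res)s {
  protocol C;
  on %(local_host)s {
    device    /dev/drbd%(minor)d;
    disk      %(disk)s;
    address   %(local_ip)s:%(port)d;
    meta-disk %(meta)s[0];
  }
  on %(remote_host)s {
    device    /dev/drbd%(minor)d;
    disk      %(disk)s;
    address   %(remote_ip)s:%(port)d;
    meta-disk %(meta)s[0];
  }
}
"""

def make_overlay(res, minor, disk, meta, local_host, local_ip,
                 remote_host, remote_ip, port=7789):
    # Write the resource definition, then wrap the existing disk in a DRBD
    # device that stays StandAlone (no connection, hence no sync traffic)
    # until a migration is started with "drbdadm connect".  External metadata
    # (meta-disk on a separate device) keeps the data disk itself untouched.
    with open("/etc/drbd.d/%s.res" % res, "w") as f:
        f.write(RES_TEMPLATE % locals())
    subprocess.check_call(["drbdadm", "create-md", res])   # initialise external metadata
    subprocess.check_call(["drbdadm", "attach", res])      # attach the backing disk only
    # Force primary so the VM can be pointed at /dev/drbd<minor> right away:
    subprocess.check_call(["drbdadm", "--", "--overwrite-data-of-peer", "primary", res])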


-----Original Message-----
From: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-api-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of George Shuklin
Sent: Wednesday, 13 July 2011 17:07
To: Dave Scott
Cc: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-API] cross-pool migrate, with any kind of storage (shared or
local)

Wow! Great news!

An idea: why not use DRBD? It is perfectly suited to replicating changes between
two block devices, and it will do all the work, including dirty-map tracking,
synchronous writes and replication at a controllable speed.

It also has different protocols for replicating new writes, sync and async -
perfect for any kind of replication.

DRBD allows its metadata to be kept separately from the media (the disk is
untouched by DRBD during operation).

The main disadvantage of DRBD is that it supports just and only TWO nodes - but
that suits the task 'replicate from ONE node to a SECOND' perfectly.

And DRBD guarantees consistency between the nodes, and it even supports
primary-primary mode, which allows us to keep the migration lag (when the VM is
not running) minimal.

And it supports online reconfiguration of the peer!

I see no better solution for this: it's already in production, it's in the
vanilla kernel, and it has everything we need.
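
Roughly, the disk side of such a migration could then reduce to a drbdadm
sequence like the sketch below (only a sketch: it assumes a per-disk resource
is already defined on both hosts with allow-two-primaries set in its net
section, and the VM hand-over itself is left out):

import subprocess, time

def dstate(res):
    # e.g. "UpToDate/UpToDate" once the background resync has finished
    return subprocess.check_output(["drbdadm", "dstate", res]).decode().strip()

def replicate_and_hand_over(res):
    subprocess.check_call(["drbdadm", "connect", res])      # bitmap-based resync to the peer
    while dstate(res) != "UpToDate/UpToDate":
        time.sleep(1)
    # Both copies are now consistent.  With primary-primary the destination can
    # run "drbdadm primary <res>" and start the VM before the source lets go,
    # so the pause is only the final memory copy.
    subprocess.check_call(["drbdadm", "disconnect", res])   # source side, after hand-over
    subprocess.check_call(["drbdadm", "down", res])         # drop the local overlay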



On Wed, 2011-07-13 at 15:21 +0100, Dave Scott wrote:
> Hi,
> 
> I've created a page on the wiki describing a new migration protocol for xapi. 
> The plan is to make migrate work both within a pool and across pools, and to 
> work with and without storage i.e. transparently migrate storage if necessary.
> 
> The page is here:
> 
> http://wiki.xensource.com/xenwiki/CrossPoolMigration
> 
> The rough idea is to:
> 1. use an iSCSI target to export disks from the receiver to the transmitter
> 2. use tapdisk's log dirty mode to build a continuous disk copy program
>    -- perhaps we should go the full way and use the tapdisk block mirror code
>    to establish a full storage mirror?
> 3. use the VM metadata export/import to move the VM metadata between pools
> 
> I'd also like to
> * make the migration code unit-testable (so I can test the failure 
> paths easily)
> * make the code more robust to host failures by host heartbeating
> * make migrate properly cancellable
> 
> I've started making a prototype-- so far I've written a simple python wrapper 
> around the iscsi target daemon:
> 
> https://github.com/djs55/iscsi-target-manager
> 
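As an illustration of step 1 above (the receiver exporting disks over iSCSI), a
wrapper around the stock tgt daemon could do something along the lines of the
sketch below - this is not the actual iscsi-target-manager code, and the IQN,
target id and device path are invented:

import subprocess

def export_disk(tid, iqn, device, initiator="ALL"):
    # Create an iSCSI target on the receiving host and expose the destination
    # disk as LUN 1, so the transmitting host can log in and copy the data.
    tgtadm = ["tgtadm", "--lld", "iscsi"]
    subprocess.check_call(tgtadm + ["--op", "new", "--mode", "target",
                                    "--tid", str(tid), "--targetname", iqn])
    subprocess.check_call(tgtadm + ["--op", "new", "--mode", "logicalunit",
                                    "--tid", str(tid), "--lun", "1",
                                    "--backing-store", device])
    subprocess.check_call(tgtadm + ["--op", "bind", "--mode", "target",
                                    "--tid", str(tid), "--initiator-address", initiator])

# e.g. export_disk(1, "iqn.2011-07.example.receiver:vm-disk-0",
#                  "/dev/VG_XenStorage/vm-disk-0")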



_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api

 

