[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: AW: [Xen-API] cross-pool migrate, with any kind of storage (shared or local)



Hi,

I think it is possible to change the disk beneath a VM by signaling tapdisk 
(via 'tap-ctl') -- reboots aren't necessary.

I've never used DRBD before.. but it sounds like I should check it out! :)

Thanks,
Dave

> -----Original Message-----
> From: George Shuklin [mailto:george.shuklin@xxxxxxxxx]
> Sent: 13 July 2011 17:04
> To: Uli StÃrk
> Cc: Dave Scott; xen-api@xxxxxxxxxxxxxxxxxxx
> Subject: Re: AW: [Xen-API] cross-pool migrate, with any kind of storage
> (shared or local)
> 
> Yes, yes, I'm talking about putting every virtual machine disk to
> StandAlone DRBD mode and reconfiguring it for replication to remote
> side.
> 
> As alternative it can be 'migrable' flag for vdi (vbd?) which parsing
> during plugging VBD and creating drbd device before any other operation.
> 
> ... And one more question: why we can not do this on line? If we have
> tapdisk code, it can suspend machine for few milliseconds, close
> original device/file, open it 'through' DRBD and resume guest machines.
> Because content of VDI is not changed (meta-data stored separately),
> this will be completely transparent to guest machine.
> 
> ... And this allow more interesting feature (we lacking right now) -
> live cross-SR migration for virtual machine.
> 
> Ð ÐÑÐ, 13/07/2011 Ð 15:25 +0000, Uli StÃrk ÐÐÑÐÑ:
> > A simple copy-operation should not require a complex DRBD-setup.
> >
> > DRBD would be nice for live migration. But you usually dont have a
> drbd-device for each disk, so would have to re-attach the disk in order
> to create a DRBD-device resulting in a vm-reboot (correct?). It would
> be nice to have a default DRBD overlay-device for each disk, so you
> could start a DRBD-Live-Migration at any time. It shouldnât have a too
> big (software interrupts?) performance impact  since the there is no
> Connection and further no sync-logic until the device gets connected.
> >
> >
> > -----UrsprÃngliche Nachricht-----
> > Von: xen-api-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-api-
> bounces@xxxxxxxxxxxxxxxxxxx] Im Auftrag von George Shuklin
> > Gesendet: Mittwoch, 13. Juli 2011 17:07
> > An: Dave Scott
> > Cc: xen-api@xxxxxxxxxxxxxxxxxxx
> > Betreff: Re: [Xen-API] cross-pool migrate, with any kind of storage
> (shared or local)
> >
> > Wow! Great news!
> >
> > And idea: why not use DRBD? It suits perfectly for replication
> changes between two block devices, it will do all work, includes an
> dirty map tracking, synchronous writing and replicating with
> controllable speed.
> >
> > It also have different protocols for new writes replication: sync and
> async - perfect for any kind of replication.
> >
> > DRBD allows to keep  metadata  separately from media (disk is
> untouched by drbd during operating).
> >
> > The main disadvantage of DRBD is just and only TWO nodes - but this
> is perfectly suites for task 'replicate from ONE node to SECOND').
> >
> > And DRBD gurantee consistency between nodes, and it even supports
> primary-primary mode, which allow as to make migration lag (when VM not
> > operates) minimal.
> >
> > And it supports online reconfiguration of peer!
> >
> > I see no more perfect solution for this: it's already in product, in
> vanilla kernel and it have everything we need.
> >
> >
> >
> > Ð ÐÑÐ, 13/07/2011 Ð 15:21 +0100, Dave Scott ÐÐÑÐÑ:
> > > Hi,
> > >
> > > I've created a page on the wiki describing a new migration protocol
> for xapi. The plan is to make migrate work both within a pool and
> across pools, and to work with and without storage i.e. transparently
> migrate storage if necessary.
> > >
> > > The page is here:
> > >
> > > http://wiki.xensource.com/xenwiki/CrossPoolMigration
> > >
> > > The rough idea is to:
> > > 1. use an iSCSI target to export disks from the receiver to the
> > > transmitter 2. use tapdisk's log dirty mode to build a continuous
> disk
> > > copy program
> > > -- perhaps we should go the full way and use the tapdisk block
> mirror code to establish a full storage mirror?
> > > 3. use the VM metadata export/import to move the VM metadata
> between
> > > pools
> > >
> > > I'd also like to
> > > * make the migration code unit-testable (so I can test the failure
> > > paths easily)
> > > * make the code more robust to host failures by host heartbeating
> > > * make migrate properly cancellable
> > >
> > > I've started making a prototype-- so far I've written a simple
> python wrapper around the iscsi target daemon:
> > >
> > > https://github.com/djs55/iscsi-target-manager
> > >
> > > _______________________________________________
> > > xen-api mailing list
> > > xen-api@xxxxxxxxxxxxxxxxxxx
> > > http://lists.xensource.com/mailman/listinfo/xen-api
> >
> >
> >
> > _______________________________________________
> > xen-api mailing list
> > xen-api@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/mailman/listinfo/xen-api
> 

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api

 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.