
Re: [Xen-users] linux stubdom



> > >  Am I missing something?
> > Migration is a staged process.
> >      1. First an empty shell domain (with no devices) is created on the
> >         target host.
> >      2. Then we copy the memory over, in several iterations, while the
> >         domain is running on the source host (iterations happen to
> >         handle the guest dirtying memory as we copy, this is the "live"
> >         aspect of the migration).
> >      3. After some iterations of live migration we pause the source
> >         guest
> >      4. Now we copy the remaining dirty RAM
> >      5. Tear down devices on the source host
> >      6. Setup devices on the target host for the incoming domain
> >      7. Resume the guest on the target host
> >      8. Guest reconnects to new backend
> > The key point is that the devices are only ever active on either the
> > source or the target host and never both. The domain is paused during
> > this final transfer (from #3 until #7) and therefore guest I/O is
> > quiesced.
> 
> At what point are scripts like
>  disk = [ ".., script=myblockscript.sh" ]
> executed? Would this be between #3 and #7?

It is part of the device teardown and setup, so it happens during #5 and
#6 (strictly speaking, I think it is just after #5 and just before #6).
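
In case it helps, a script named via script= in the disk line is just a
hotplug script that the toolstack runs at those points. Below is a
minimal sketch of what "myblockscript.sh" could look like. It assumes
the usual calling convention on a Xen install ($1 is "add" or "remove",
XENBUS_PATH points at the backend's xenstore directory, and
block-common.sh provides write_dev), so check it against the standard
scripts shipped in /etc/xen/scripts on your system; the device name is
made up:

    #!/bin/bash
    # Hypothetical /etc/xen/scripts/myblockscript.sh, matching the
    # disk = [ "..., script=myblockscript.sh" ] example above.
    # Assumes xl passes "add" or "remove" as $1 and sets XENBUS_PATH.

    dir=$(dirname "$0")
    . "$dir/block-common.sh"

    case "$1" in
        add)
            # Bring the storage up on this host (assemble the md array,
            # log in to the iSCSI target, etc.), then hand the resulting
            # block device back to the toolstack.
            dev=/dev/md/guestdisk   # hypothetical device name
            write_dev "$dev"
            ;;
        remove)
            # Undo whatever "add" did so the other host can take over.
            ;;
    esac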

On xen-devel at the minute there is a patch series under discussion to
make the script hooks more flexible, in particular adding pre- and
post-migrate hooks (called at roughly steps #1-#3 and #7-#8 above) which
can set up parts of the storage stack in advance, i.e. things which are
safe to do with the guest still running but might be slow to initialise
(e.g. an iSCSI login, but not opening the device). I don't think this
needs to affect you, though.

> > > > ensure that the block device is only ever active on one end or the
> > > > other and never on both -- otherwise you would get potential
> > > > corruption.
> > > Yeah, this is the problem! If the active RAID1 logic lives within the
> > > domU (i.e. Linux software RAID1) and migrates with it, I don't have to
> > > worry about it. I'll try to accomplish the same with a "helper" domU
> > > that sits very close to the normal domU and is live migrated whenever
> > > the normal domU is migrated.
> > This might be possible but as I say the more normal approach would be to
> > have a "RAID" domain on both hosts and dynamically map and unmap the
> > backing guest disks at steps #5 and #6 above.
> 
> With the above info (that block devices are removed and added in the
> right order during live migration), I'm thinking more and more about a
> driver domain.
> 
> But first I'll test stopping and assembling the md devices in the dom0s
> while migrating. If that works I could move this job into a driver
> domain. Wow, this gives me a new view of the setup.

Excellent ;-)
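
For what it's worth, the two halves of that test boil down to something
like the following on each dom0 (array name and component devices are
made up, adjust to your setup):

    # on the source host, as part of tearing down the backend (#5)
    mdadm --stop /dev/md/guestdisk

    # on the target host, as part of setting up the backend (#6)
    mdadm --assemble /dev/md/guestdisk /dev/mapper/lun0 /dev/mapper/lun1

Once that works by hand, the same commands can live in the add/remove
branches of a block script or be run by a driver domain.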

Ian.




 

