
Re: [Xen-users] linux stubdom



On Wed, 2013-01-30 at 15:35 +0000, Markus Hochholdinger wrote:
> > > Do you know how I can live migrate a domU which depends on a driver domU?
> > > How can I migrate the driver domU?
> > > For my understanding the block device has to be there on the destination
> > > dom0 before live migration begins but is also used on the source dom0
> > > from the migrating, but still running, domU.
> > Not quite, when you migrate there is a pause period while the final copy
> > over occurs and at this point you can safely remove the device from the
> > source host and make it available on the target host. The toolstack will
> 
> Isn't the domU on the destination created with all its virtual devices before 
> the migration starts?

No

> What if the blkback is not ready on the destination host?

We have to arrange that it is.

>  Am I missing something?

Migration is a staged process.
     1. First an empty shell domain (with no devices) is created on the
        target host.
     2. Then we copy the memory over, in several iterations, while the
        domain is running on the source host (iterations happen to
        handle the guest dirtying memory as we copy; this is the "live"
        aspect of the migration).
     3. After some iterations of live migration we pause the source
        guest.
     4. Now we copy the remaining dirty RAM.
     5. Tear down devices on the source host.
     6. Set up devices on the target host for the incoming domain.
     7. Resume the guest on the target host.
     8. The guest reconnects to its new backends.
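
From the admin's side the whole sequence is driven by a single toolstack
command; roughly (domain and host names are placeholders):

    xl migrate mydomu dst-host.example.com
    # or, with the older xm toolstack:
    xm migrate --live mydomu dst-host.example.com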

The key point is that the devices are only ever active on either the
source or the target host, never on both. The domain is paused during
this final transfer (from #3 until #7) and therefore guest I/O is
quiesced.

In your scenario I would expect that in the interval between #5 and #6
you would migrate the associated driver domain.
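
To make the ordering concrete: with hypothetical names, and glossing over
the fact that the toolstack's hotplug machinery normally does the
attach/detach for you, the events in that pause window would be roughly:

    # on the source host, guest already paused (after #4):
    xl block-detach mydomu xvda            # step #5: tear down the source backend
    xl migrate my-driver-domu dst-host     # your driver domU moves in this window

    # on the target host, before the guest is resumed (#7):
    # (add backend=<driver domU name> to the disk spec if the disk is
    # served from a driver domain rather than dom0)
    xl block-attach mydomu 'phy:/dev/backing-dev,xvda,w'      # step #6

The guest's frontend only reconnects when it is resumed at #7, so as long
as the detach and attach both happen inside the pause window the disk is
never writable from two places at once.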

> 
> > ensure that the block device is only ever active on one end or the other
> > and never on both -- otherwise you would get potential corruption.
> 
> Yeah, this is the problem! If I migrate the active raid1 logic within the
> domU (aka linux software raid1) I don't have to care. I'll try to
> accomplish the same with a "helper" domU very near to the normal domU and
> which is live migrated while the normal domU is migrated.

This might be possible, but as I say, the more normal approach would be
to have a "RAID" domain on both hosts and dynamically map and unmap the
backing guest disks at steps #5 and #6 above.
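
As a rough sketch of that approach (all names are placeholders, and the
exact disk-spec syntax depends on your xl version; see
xl-disk-configuration): each host runs its own long-lived storage domU,
and during the guest's pause window you do the unmap/map by hand or from
a migration wrapper script:

    # step #5, source host (guest paused): unmap from the source's RAID domU
    xl block-detach mydomu xvda

    # step #6, target host: map the same backing disk, now served by the
    # target host's RAID domU
    xl block-attach mydomu \
        'vdev=xvda, access=rw, target=/dev/raid/mydomu-disk, backend=raid-domu'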


> > While you could migrate the driver domain during the main domU's pause
> > period it is much more normal to simply have a driver domain on each
> > host and dynamically configure the storage as you migrate.
> 
> If I dynamically create the software raid1 I have to add a lot of checks
> which I don't need now.
> I've already thought about a software raid1 in the dom0 and the resulting md 
> device as xvda for a domU. But I have to assemble the md device on the 
> destination host before I can deactivate the md device on the source host. 

No you don't, you deactivate on the source (step #5) before activating
on the target (step #6).

> The race condition is, if I deactivate the md device on the source host
> while data is only written to one of the two devices. On the destination
> host my raid1 seems clean but my two devices differ. The other race
> condition is, if my raid1 is inconsistent while assembling on the
> destination host.

I'd have thought that shutting down the raid in step #5 and reactivating
it in step #6 would guarantee that neither of these was possible.
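
A minimal sketch of that ordering (device names and the md array are
placeholders for however your two mirror legs appear on each host):

    # step #5, source host, guest already paused:
    xl block-detach mydomu xvda          # no further writes can reach the array
    mdadm --stop /dev/md0                # shut the raid1 down cleanly

    # step #6, target host:
    mdadm --assemble /dev/md0 /dev/mirror-leg-a /dev/mirror-leg-b
    xl block-attach mydomu 'phy:/dev/md0,xvda,w'

Because the array is only stopped after the guest is paused, and only
assembled on the target after it has been stopped on the source, it can
never be writable on both hosts and never gets assembled while one leg is
still being written to.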

> > > Can I combine a driver domU to a normal domU like I can combine a stubdom
> > > with a normal domU?
> > If you want, it would be more typical to have a single driver domain
> > providing block services to all domains (or one per underlying physical
> > block device).
> 
> I want :-) A single driver domain would need more logic (for me) while doing 
> live migrations.

OK, but be aware that you are treading into unexplored territory; most
people do things the other way. This means you are likely going to have
to do a fair bit of heavy lifting yourself.

Ian.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

