
Re: [Xen-users] live migration on SAN


  • To: "Fast Jack" <fastjack75@xxxxxxxxx>, "Xen list" <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Florian Heigl" <florian.heigl@xxxxxxxxx>
  • Date: Wed, 13 Jun 2007 06:21:36 +0200
  • Delivery-date: Tue, 12 Jun 2007 21:19:44 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

2007/6/12, Fast Jack <fastjack75@xxxxxxxxx>:
> Hi,


> On the hardware side we have a number of servers connected to a SAN
> via fibre channel. The problem now is that I can't find any definitive
> requirements for the virtual block devices and filesystems presented
> to the domUs. The documentation doesn't say much on that subject.
> I have read a number of example setups, ranging from ext3 in LVM
> (non-cluster) volumes on the SAN disk to cluster-aware solutions based
> on e.g. OCFS2.

> As far as I can tell, no concurrent access to each domU's storage from
> multiple hosts takes place during live migration or otherwise. So I'm
> wondering whether cluster-safe technology like OCFS2, GFS or CLVM is
> really necessary to ensure that the domU's filesystem is not corrupted
> during migration. If possible I would like to avoid using
> cluster software, as it brings with it new points of failure.

low-impact options i'd see (avoiding most of the layers that make up a cluster):
- EVMS + Cluster Segment Manager
- Redhat CLVM (i think i'd choose that)
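
rough sketch of the CLVM route (the VG name and LUN path are made up,
and it assumes the Redhat cluster infrastructure - cman + clvmd - is
already up on all dom0s):

  # on every dom0: switch LVM to cluster-wide locking (locking_type = 3)
  lvmconf --enable-cluster
  service clvmd start

  # once, from any dom0: clustered VG plus one LV per domU on the SAN LUN
  vgcreate -cy vg_xen /dev/mapper/san_lun0
  lvcreate -L 10G -n domu1-disk vg_xen

every dom0 then sees the same /dev/vg_xen/domu1-disk, with metadata
changes locked cluster-wide.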

from experience, the risk of concurrent access to data segments or
even just disklabels is high and annoying (out-of-sync kernel labels,
relabeling a disk that looks all unused and empty, udev configuration
errors that shift device names, dozens more, down to the simplest
thing: someone dd'ing all over your disks). all these usually go away
using some kind of cluster, scsi reservations and such. of course
these add extra risk and configuration woes of their own, but they
haven't been invented for no reason.

personally, i found ocfs2 to be the easiest way out.
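
roughly, that looks like this (cluster name and device paths invented;
o2cb ships with ocfs2-tools):

  # /etc/ocfs2/cluster.conf lists all dom0s, then on each of them:
  /etc/init.d/o2cb online xencluster

  # once, from one node: enough node slots for all dom0s, then mount
  # it everywhere
  mkfs.ocfs2 -N 4 -L xenimages /dev/mapper/san_lun0
  mount -t ocfs2 /dev/mapper/san_lun0 /var/lib/xen/images

domU disks then live as files on that mount and every dom0 can reach
them for migration.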

> So my questions are:
> What are the actual requirements on the domUs' storage?

must be reachable either by pointing at a file (file:/tap:aio) or at
something under /dev (phy:) - but the usual write locking (r/r!/w/w!)
in block-attach is only enforced on a single dom0.
so if two dom0s ever start the same domU, it will render the backend
storage - whatever type, whatever storage, whatever filesystem - into
rubbish after the first metadata update.
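
for illustration, that's the disk line in the domU config (paths made
up; 'w' requests an exclusive writable attach, 'w!' would force
sharing and bypass even that local check):

  # exclusive writable attach - but note the lock is only checked by
  # the dom0 doing the attach, not across the SAN
  disk = [ 'phy:/dev/vg_xen/domu1-disk,xvda,w' ]

  # file-backed alternative:
  # disk = [ 'tap:aio:/var/lib/xen/images/domu1.img,xvda,w' ]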

> Could you give me a few examples of thoroughly tried and tested setups?

Can't give you either of those :p
I hear that heartbeat2, the Redhat cluster suite and Primecluster are
quite tried and tested. Most of the easier setups have been tested and
fail at some point.
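
whichever you pick, the xend side of migration is the same; the
relocation settings sit in /etc/xen/xend-config.sxp (hostnames here
are placeholders):

  # /etc/xen/xend-config.sxp on every dom0
  (xend-relocation-server yes)
  (xend-relocation-port 8002)
  (xend-relocation-hosts-allow '^dom0a$ ^dom0b$')

  # then, from the source host:
  xm migrate --live domu1 dom0b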

btw:
> I don't really know how to figure out if there is a possible
> race-condition between data written by the old guest and the new
> guest reading the same data.

Yes - hopefully the new guest panics. Like someone waking up in the
wrong house, next to the wrong wife.

Florian

--
'You don't need to worry about your future'

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

