
Re: [Xen-users] xen storage options - plase advise



On Wednesday, 2010-03-03 at 18:44 -0500, James Pifer wrote:
> > > Yes, file based domU's are currently on ocfs2 on a SAN. 
> > > 
> > > I don't do any snapshotting right now, but that's not to say I won't
> > > want to some day.
> > 
> > Yes, but you _want_ to be able to snapshot your domUs, for
> > "hot backups" of running domUs.
> 
> This is a bit off the topic of my original post, but can you elaborate
> just a little on this? How do you use snapshot for hot-backup? 


I wrote a script that does this job.

The script does the following:

rotate old backups and create a snapshot of a running domU


  # lvcreate -L 10G -s -n backup /dev/data/domu-disk

mount that snapshot read-only

  # mount -o ro /dev/data/backup /mnt/backup

rsync the FS to the backup location

  # rsync -avzH --numeric-ids -e ssh /mnt/backup/ user@server:/path/to/backup/

The backup location holds 6 daily, 4 weekly and 3 monthly backups of
each domU. The latest backup also goes to tape for archiving.
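The steps above can be tied together in one script. Here is a minimal
sketch, assuming the VG is named `data`, the snapshot mounts under
`/mnt/backup`, and the rsync target is `user@server:/backups/` (all of
these names, the sizes, and the rotation are illustrative; adjust them
to your setup):

```shell
# Hot-backup of one domU via an LVM snapshot (sketch, illustrative names).
# Run as root on the dom0 that acts as the backup machine.
backup_domu() {
    vg=data                          # volume group holding the domU volumes
    lv=domu-disk                     # logical volume to back up
    snap=backup                      # snapshot LV name
    mnt=/mnt/backup                  # temporary mount point (assumption)
    dest="user@server:/backups/"     # rsync target (assumption)

    # 1. snapshot the running domU's disk (10G of copy-on-write space)
    lvcreate -L 10G -s -n "$snap" "/dev/$vg/$lv" || return 1

    # 2. mount the snapshot read-only
    mkdir -p "$mnt"
    mount -o ro "/dev/$vg/$snap" "$mnt"

    # 3. rsync the filesystem to the backup location
    rsync -avzH --numeric-ids -e ssh "$mnt/" "$dest"

    # 4. clean up: unmount and drop the snapshot
    umount "$mnt"
    lvremove -f "/dev/$vg/$snap"
}
```

The daily/weekly/monthly rotation would run on the backup location
before step 3, e.g. with hard-linked directories or a tool like
rsnapshot.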

> > 
> > >  My goal is to get to a point where things are stable
> > > and I can run something to manager everything, ie Orchestrate,
> > > Convirture, something to manage restarting/migrating domU's when one of
> > > the servers has problems. Or I just need to have a server down for
> > > maintenance.
> > 
> > You're searching for openQRM ;-)
> 
> Thanks for this suggestion. Glanced at it briefly and will definitely
> look at it some more. Have you looked at convirt? I liked how that
> looked, but HA features appear to cost extra. 

openQRM is GPL ;-)

> > 
> > > So in general terms, how would I setup LVM (clvm)? Let's say I have two
> > > servers (in this case running sles11). Each server has the SAME
> > > vdisk(LUN) from our Xiotech SAN assigned to it for storage. Let's say
> > > it's a 400gb vdisk. If I add additional servers, they too would be
> > > assigned the same vdisk. Similarly, I could add additional vdisks when
> > > more storage is required. 
> > 
> > You have to make sure that _EVERY_ dom0 "sees" the storage LUN!
> > 
> > If your hosts are able to connect to this storage, ALL hosts should be
> > able to use these volumes (LVols).
> > 
> > > 
> > > Anyway, on the first server I setup LVM. Somehow the second server would
> > > also see that as lvm and be able to mount it?
> > 
> > pvscan 
> > vgscan 
> > vgchange -ay 
> > 
> > should be enough for ALL hosts to be able to start the domUs residing
> > in this LUN / VG.
> 
> Ok, I may try this tomorrow, but just to clarify, doing this does NOT
> allow me to use snapshotting? Why is that?

I suggest you use LVM and NOT CLVM. CLVM does not support
snapshotting at all.

For backups it's convenient to use one Xen host as the backup machine,
which runs the backups via LVM snapshotting.

> 
> So if I have the same 400gb device(LUN) assigned to each server.
> I create a logical volume

LVM is a _volume manager_. LVM virtualises your disk(s) into volume
groups and lets you dynamically assign disk space as logical volumes.
Each volume can hold its own filesystem.

So you create as many LVols as you have domUs.

In my case:

  # grep phy /etc/xen/domu.cfg

   'phy:/dev/data/domu-swap,sda1,w',
   'phy:/dev/data/domu-disk,sda2,w',
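The two LVols referenced in that config could have been created along
these lines (sketch; the VG name `data` matches the config above, while
the sizes and the filesystem choice are assumptions):

```shell
# Illustrative: carve one swap and one root volume per domU out of the
# shared VG "data". Run as root on any dom0 that sees the LUN.
create_domu_lvs() {
    lvcreate -L 1G  -n domu-swap data     # exported to the guest as sda1
    lvcreate -L 20G -n domu-disk data     # exported to the guest as sda2
    mkswap /dev/data/domu-swap            # sizes/FS are assumptions
    mkfs.ext3 /dev/data/domu-disk
}
```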


> I create a file system on that logical volume
> Mount that on each server using the same name for the mount point.

No. Each server is able to see each disk directly; you don't mount the
LVols in the dom0, the domUs use them as block devices.
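To make the LVols visible, each additional dom0 just rescans and
activates the shared VG (sketch; assumes the LUN is already presented
to the host and the VG is called `data`):

```shell
# Run on every dom0 that shares the LUN (illustrative VG name).
activate_shared_vg() {
    pvscan                # rescan block devices for LVM physical volumes
    vgscan                # rebuild the volume-group cache
    vgchange -ay data     # activate all LVols in the VG "data"
    lvs data              # list the domU volumes that are now usable
}
```

Note that with plain (non-clustered) LVM you must take care yourself
that a given LVol is only in use by one host at a time.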

> 
> So essentially each server would see /data which is the file system on
> the LVM. Under /data/images I could store my file based domUs.

LVM is _disk-based_ and not usable as a file-backed store for Xen, but
it's smarter to use LVM anyway ;-)

>  That's
> essentially what I'm doing right now on ocfs2. Does snapshotting work on
> ocfs2? If so, how are they different in terms of snapshotting?
> 
> Thanks,
> James


hth,


thomas


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

