
Re: [Xen-users] Storage Question



On Sun, Feb 1, 2009 at 5:08 AM, Ramon Moreno <rammor1@xxxxxxxxx> wrote:
> Option 1:
> I present a single iscsi lun to my systems in the cluster. I then
> carve it up using lvm for the vms. The problem with this solution is
> that if I clone a vm, it takes massive network bandwidth.

True. But for most iscsi servers, cloning will take a massive amount of
resources anyway (whether it's network, disk I/O, or both). Using
ionice during the cloning process might help by giving it the lowest
I/O priority.
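
In case it helps, here's a rough sketch of what I mean, assuming the
clone is just a dd from one LV to another (the LV names below are
placeholders, not from your setup):

  # run the copy in the "idle" I/O class so it only gets bandwidth
  # nothing else wants (the classes only take effect with CFQ)
  ionice -c3 dd if=/dev/vg_xen/vm_template of=/dev/vg_xen/vm_new bs=1M

  # or lower the priority of a copy that's already running
  ionice -c3 -p <pid of dd>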

> Option 2:
> I present multiple iscsi luns to the systems in the cluster. I still
> add to lvm so I don't have to worry about labeling and such. Adding
> to lvm ensures things don't change with the lun on reboot.

I think you can also use /dev/disk/by-path and by-id for that purpose.
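
For example, something like this (the ID below is just a placeholder;
check ls -l /dev/disk/by-id/ on your host for the real name):

  # stable names for the same LUN
  ls -l /dev/disk/by-path/ /dev/disk/by-id/

  # point the domU config at the stable name instead of /dev/sdX
  disk = [ 'phy:/dev/disk/by-id/scsi-<lun serial>,xvda,w' ]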

> With this option I
> can use the storage layer (using a netapp like solution) to clone luns
> and such.

If you clone LUNs on the storage/target side, then you can't simply use
LVM on the initiator: the cloning process copies the LVM label as well,
making the cloned LUN a duplicate PV, which can't be used on the same
host.
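
If you really do want to attach a storage-side clone to the same host,
and your LVM version ships the vgimportclone script, something like this
should rewrite the duplicate PV/VG UUIDs (device and VG names below are
made up for the example):

  # the cloned LUN showed up as /dev/sdc and carries a VG called vg_guest
  vgimportclone --basevgname vg_guest_clone /dev/sdc
  vgchange -ay vg_guest_clone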

> This eliminates the possibility of saturating the network
> interfaces when cloning vms.

How does your iscsi server (netapp or whatever) clone a LUN? If it
copies data, then you'd still be I/O bound.

An exception is if you use a zfs-backed iscsi server (like opensolaris),
where the cloning process requires near-zero I/O thanks to zfs clone.
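
Roughly, on opensolaris it looks like this (pool and volume names are
just examples):

  # snapshot the zvol backing the template LUN, then clone it;
  # the clone shares blocks with the snapshot, so almost nothing is copied
  zfs snapshot tank/vm-template@gold
  zfs clone tank/vm-template@gold tank/vm-new

  # export the clone as its own iscsi LUN (or use COMSTAR instead)
  zfs set shareiscsi=on tank/vm-new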

Note that with option 2 you can also avoid using clustering altogether
(by putting config files on NFS or synchronizing them manually), which
eliminates the need for fencing. This would greatly reduce complexity.
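
For example (server name and paths are placeholders), each dom0 could
simply mount the shared configs and start guests from there:

  mount -t nfs nfs-server:/export/xen-configs /etc/xen/shared
  xm create /etc/xen/shared/vm01.cfg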

Regards,

Fajar


