
Re: [Xen-users] Storage Question


  • To: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
  • From: Ramon Moreno <rammor1@xxxxxxxxx>
  • Date: Sun, 1 Feb 2009 14:24:32 -0800
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sun, 01 Feb 2009 14:25:22 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

> Fajar,
>
> Thanks for the info.
>
> I think option 2 sounds most attractive. I would like to get rid of
> the gfs filesystem, so going with nfs is a great idea.
>
> As far as the clustering goes, I use the software mainly for the
> following reasons:
>
> * A global view for the redistribution of resources.
> * Automated failover.
>
> Since I have 20 nodes in each cluster I am looking at, I need a more
> global view of how things look, and if I become resource constrained,
> I would like the clustering software to make the failover decision
> based on available resources. The only thing I oversubscribe is cpu,
> by 50%, so if a host becomes unusable, I would like to fail over vms
> to another node based on policy decisions.
>
> Any thoughts on this would also be much appreciated... Thanks for your
> reply. nfs is an excellent idea.
>
> On Sat, Jan 31, 2009 at 5:12 PM, Fajar A. Nugraha <fajar@xxxxxxxxx> wrote:
>> On Sun, Feb 1, 2009 at 5:08 AM, Ramon Moreno <rammor1@xxxxxxxxx> wrote:
>>> Option 1:
>>> I present a single iscsi lun to my systems in the cluster. I then
>>> carve it up using lvm for the vms. The problem with this solution is
>>> that if I clone a vm, it consumes massive network bandwidth.
>>
>> True. But on most iscsi servers cloning will consume a massive
>> amount of resources anyway (whether it's network, disk I/O, or
>> both). Using ionice during the cloning process might help, by
>> giving the clone the lowest I/O priority.
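>>
>> A minimal sketch of that (the LV names here are made up):
>>
>>   # class 3 = "idle": the clone only gets disk time when nothing
>>   # else wants it (this needs the CFQ I/O scheduler on that disk)
>>   ionice -c3 dd if=/dev/vg0/vm-template of=/dev/vg0/vm-new bs=1M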
>>
>>> Option 2:
>>> I present multiple iscsi luns to the systems in the cluster. I still
>>> add them to lvm so I don't have to worry about labeling and such.
>>> Adding them to lvm ensures the device names don't change on reboot.
>>
>> I think you can also use /dev/disk/by-path and by-id for that purpose.
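>>
>> For example, in a domU config (this IQN and path are made up; ls
>> /dev/disk/by-path/ shows the real names on your initiator):
>>
>>   disk = [ 'phy:/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2009-01.com.example:vols-lun-0,xvda,w' ]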
>>
>>> With this option I
>>> can use the storage layer (using a netapp like solution) to clone luns
>>> and such.
>>
>> If you clone LUNs on the storage/target side, then you can't use LVM
>> on the initiator. The cloning process will copy any LVM label on the
>> LUN, making the clone a duplicate PV, which can't be used on the
>> same host.
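>>
>> pvs will warn about the duplicate PV UUID. If you do want to keep
>> LVM, recent lvm2 ships a vgimportclone script that rewrites the
>> UUIDs on the clone, roughly (the device and VG name are made up):
>>
>>   vgimportclone --basevgname vg_clone /dev/sdX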
>>
>>> This eliminates the possibility of saturating the network
>>> interfaces when cloning vms.
>>
>> How does your iscsi server (netapp or whatever) clone a LUN? If it
>> copies data, then you'd still be I/O bound.
>>
>> An exception is if you use a zfs-backed iscsi server (like
>> opensolaris), where the cloning process requires near-zero I/O with
>> zfs clone.
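>>
>> Roughly (dataset names are made up; the clone shares blocks with the
>> snapshot, so it needs almost no I/O or space up front):
>>
>>   zfs snapshot tank/vols/template@gold
>>   zfs clone tank/vols/template@gold tank/vols/vm-new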
>>
>> Note that with option 2 you can also avoid using clustering altogether
>> (by putting config files on NFS or synchronizing them manually), which
>> eliminates the need for fencing. This would greatly reduce complexity.
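>>
>> For example, mount a shared config directory on every node (the
>> server and paths here are made up) and start guests from it:
>>
>>   mount -t nfs filer:/export/xen-configs /etc/xen/domains
>>   xm create /etc/xen/domains/vm01.cfg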
>>
>> Regards,
>>
>> Fajar
>>
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
