
Re: [Xen-users] Xen SAN Questions



Guys, I did start a thread on this before this one. I've been asking about 
using NAS/SAN, locally attached vs. FC and Ethernet. I've also been asking 
about clustering and redundancy, and have been given a lot of good 
information, especially from Fajar, who sounds like a guru.

See "Optimizing I/O", "Distributed vs Cluster" and I can't recall the other 
thread now.

Mike


On Tue, 27 Jan 2009 15:58:37 -0200, Ricardo J. Barberis wrote:
> On Tuesday, 27 January 2009, Tait Clarridge wrote:
> 
>> Hello Everyone,
>
> Hi, I'm no expert but I'm on the same path as you, so let's try to help
> each other... and get help from others as we go :)
> 
>> I recently had a question that got no responses about GFS+DRBD clusters
>> for Xen VM storage, but after some consideration (and a lot of Googling)
>> I have a couple of new questions.
>
>> Basically what we have here are two servers that will each have a RAID-5
>> array filled up with 5 x 320GB SATA drives. I want to have these as
>> usable file systems on both servers (as they will both be used for Xen VM
>> storage), but they will be replicating in the background for disaster
>> recovery purposes over a GbE link.
>
> OK,
> 
>> First of all, I need to know if this is good practice because I can see a
>> looming clusterf**k if both machines are running VMs from the same shared
>> storage location.
>
> Well, that shouldn't happen if you're using GFS or another cluster-aware
> filesystem.
> 
>> Second, I ran a test on two identical servers with DRBD and GFS in a
>> Primary/Primary cluster setup and the performance numbers were appalling
>> compared to local ext3 storage, for example:
>
> Yes, cluster filesystems have lower performance than non-cluster
> filesystems, because the former have to take locks on files/dirs.
> Add DRBD replication on top of that and performance will be even lower.
> 
>> 5 Concurrent Sessions in iozone gave me the following:
>
>> Average Throughput for Writers per process:
>> EXT3:               41395.96 KB/s
>> DRBD+GFS (2 nodes): 10884.23 KB/s
>
>> Average Throughput for Re-Writers per process:
>> EXT3:               91709.05 KB/s
>> DRBD+GFS (2 nodes): 15347.23 KB/s
>
>> Average Throughput for Readers per process:
>> EXT3:              210302.31 KB/s
>> DRBD+GFS (2 nodes):  5383.27 KB/s  <-------- a bit ridiculous
>
> Ridiculous indeed
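> 
> In case anyone wants to reproduce numbers like that, a 5-stream throughput
> run would look something like this (the file and record sizes here are only
> a guess, not necessarily what you used):
> 
>   cd /mnt/gfs            # or the local ext3 mount for the baseline
>   iozone -t 5 -s 512m -r 64k -i 0 -i 1
> 
> -t 5 runs five concurrent processes, and -i 0 / -i 1 cover the
> write/rewrite and read/reread tests the figures above come from.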
> 
>> And more of the same, where basically it ranged from 4x slower to however
>> many times slower the reads were. I can only assume that this would be a
>> garbage setup for Xen VM storage and was wondering if anyone could point
>> me to a solution that may be more promising. We are currently running out
>> of space on our NetApp (that does snapshots for backups) for VMs, not to
>> mention the I/O available for multiple VMs on a single NetApp directory
>> is already dangerously low.
>
>> Anyone have thoughts as to what might solve my problems?
>
> Have you tried any GFS optimizations? E.g. mount with noatime and
> nodiratime, disable GFS quotas, etc. The first two should improve read
> performance.
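> 
> Something along these lines, untested here and with made-up device names
> and mount point (check gfs_tool(8) for the exact tunables on your version):
> 
>   # mount the GFS volume without atime/diratime updates
>   mount -o noatime,nodiratime /dev/drbd0 /var/lib/xen/images
> 
>   # turn off quota accounting and enforcement on the mounted GFS
>   gfs_tool settune /var/lib/xen/images quota_account 0
>   gfs_tool settune /var/lib/xen/images quota_enforce 0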
> 
>> I am thinking a few things:
>
>> - Experiment with DRBD again with another filesystem (XFS?) and have it
>> re-exported as NFS to both machines (so they can both bring up VMs from
>> the "pool")
>
> I guess NFS could work, unless you have too many machines using it (Linux's
> NFS sucks)
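> 
> If you try that, the export side is just standard NFS on whichever node is
> DRBD primary; something like this (addresses and paths are only
> placeholders):
> 
>   # /etc/exports on the DRBD primary
>   /srv/vmstore  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
> 
>   exportfs -ra
> 
>   # and on each Xen host:
>   mount -t nfs primary:/srv/vmstore /var/lib/xen/images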
> 
>> - Export one of the machines as iSCSI and software RAID it on a primary
>> (not really what I want but might work)
>
> This one sounds interesting.
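> 
> Roughly: export a volume from the second box as an iSCSI target, log in to
> it from the first box with open-iscsi, and mirror it against the local
> volume with md. A rough, untested sketch (the IQN and device names are
> made up):
> 
>   # on box A, once the target on box B is up:
>   iscsiadm -m discovery -t sendtargets -p boxB
>   iscsiadm -m node -T iqn.2009-01.example:vmstore -p boxB --login
> 
>   # mirror the local LV with the remote (iSCSI) disk, here /dev/sdc
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vg0/vmstore /dev/sdc
> 
> The catch is that only the box doing the RAID can serve the data; if it
> dies, box B still holds a copy but nothing is exporting it.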
> 
>> - Write a custom script that will back up the VM storage directories to a
>> 3rd server (don't really have the budget for a redundant backup server)
>> using something like rsync
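>
> For that one, a minimal cron-able sketch would be something like this
> (hostname and paths are made up):
>
>   #!/bin/sh
>   # copy VM images to a third box; --partial keeps interrupted transfers resumable
>   rsync -aH --partial --delete /var/lib/xen/images/ backupbox:/backup/xen-images/
>
> Just keep in mind the copies of running VM images won't be consistent
> unless you pause or snapshot the guests first.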
>
>> And finally, what kind of redundant server to server storage do most
>> people use here?
>
> From what I've been reading on the list, most people use some form of
> DRBD + AoE or iSCSI.
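> 
> Very roughly, the usual pattern is a DRBD device exported with vblade (AoE)
> or an iSCSI target; for example (resource name and interface are made up):
> 
>   # on both nodes, with the resource defined in /etc/drbd.conf
>   drbdadm up vmstore
> 
>   # on the active node only
>   drbdadm primary vmstore
>   vblade 0 1 eth1 /dev/drbd0    # export it as AoE shelf 0, slot 1 on eth1
> 
> The Xen hosts then see it as an AoE device (e0.1) and can put LVM or a
> filesystem on top.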
> 
> Check the thread with subject "disk backend performance" from November 27,
> 2008. A very nice discussion started there involving Thomas Halinka and
> Stefan de Konink about AoE vs. iSCSI (thank you both!).
> 
> Also, the thread with subject "lenny amd64 and xen" will be of interest:
> on November 27 Thomas started a description of his self-built SAN, which
> is very insightful.
> 
>> Thanks a lot for reading my novel of a question :)
>
>> Best,
>
>> Tait Clarridge
>
> Best regards,



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

