
Re: [Xen-users] RAID10 Array



On Thursday 17 June 2010 09:32:37 Adi Kriegisch wrote:
> Hi!
> 
> > I have 3 RAID ideas, and I'd appreciate some advice on which would be
> > better for lots of VMs for customers.
> >
> > My storage server will be able to hold 16 disks. I am going to export 1
> > iSCSI LUN to each xen node. 6 nodes will connect to one storage server,
> > so that's 6 LUNs per server of equal size. The server will connect to a
> > switch using quad port bonded NICs (802.3ad), and each Xen node will
> > connect to the switch using Dual port bonded NICs.
> 
> hmmm... with one LUN per server you will lose the ability to do live
> migration -- or am I missing something?
> Some people mention problems with bonding more than two NICs for iSCSI, as
> the reordering of commands/packets adds tremendously to latency and
> load. If you want high performance and want to avoid latency issues, you
> might want to choose ATA-over-Ethernet.

If I understand correctly, you could still do live migration, but since each 
LUN is attached to a single node, you would have to migrate all VMs on that 
LUN at once.

> > I'd appreciate any thoughts or ideas on which would be best for
> > throughput/IPOS
> 
> Your server is a Linux box exporting the RAIDs to your Xen servers? Then
> just take fio and do some benchmarking. If you're using software RAID, then
> you might want to add RAID5 to the equation.
> I'd suggest measuring the performance of your RAID system with various
> configurations and then choosing which level of isolation gives the best
> performance.
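To make that benchmarking concrete: here is a minimal fio job file as a 
sketch for a VM-like mixed random workload (the filename is a placeholder 
for whichever array or LUN is under test):

```ini
; randrw.fio -- example sketch; /dev/md0 is a placeholder device
[randrw-test]
filename=/dev/md0      ; array under test (DESTRUCTIVE on a raw device!)
direct=1               ; bypass the page cache
rw=randrw              ; mixed random read/write, roughly VM-like
rwmixread=70           ; 70% reads, 30% writes
bs=4k
ioengine=libaio
iodepth=32
runtime=60
time_based=1
group_reporting=1
```

Running `fio randrw.fio` against each candidate RAID layout and comparing 
the reported IOPS should give a fair basis for choosing between them.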
> I don't think a setup with 6 hot-spare disks is necessary -- at least not
> when they're connected to the same server. Depending on the quality of
> your disks, 1 to 3 should suffice. With e.g. 1 hot spare in the server
> plus some cold spares in your office, you should be able to survive a
> broken hard disk.
> You should also "smartctl -t long" your disks frequently (i.e. once per
> week) and do a more or less permanent resync of your RAID to be able to
> detect disk errors early. (The worst-case scenario is to never check your
> disks -- then a disk breaks and is replaced by a hot/cold spare -- and the
> RAID resync fails other disks in your array, just because the bad blocks
> are already there...)
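The weekly SMART self-test and periodic array scrub can be automated; a 
sketch as a cron fragment (the disk names, array name, and schedule are all 
placeholders for your actual setup):

```
# /etc/cron.d/raid-health -- example sketch; sda..sdd and md0 are placeholders
# Weekly long SMART self-test on each member disk (Sunday, 02:00):
0 2 * * 0  root  for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do smartctl -t long "$d"; done
# Monthly full read/compare pass ("check") of the md array (1st, 03:00):
0 3 1 * *  root  echo check > /sys/block/md0/md/sync_action
```

On Debian-based systems the mdadm package already ships a monthly 
checkarray cron job that covers the second part.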

I've been following Jonathan's postings for a while, and my general feeling 
is that there's quite a difference between what he aims for and what reality 
offers as boundaries. I wish him luck anyway; it would be cool if he could 
get things working. By the way, I will post my planned setup in response to 
one of his other postings; it might be useful to compare.

> Hope this helps
> 
> -- Adi
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
> 


B.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
