
RE: [Xen-users] RAID10 Array


  • To: "Robert Dunkley" <Robert@xxxxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Jonathan Tripathy" <jonnyt@xxxxxxxxxxx>
  • Date: Thu, 17 Jun 2010 14:19:23 +0100
  • Cc:
  • Delivery-date: Thu, 17 Jun 2010 06:22:17 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcsOHXsjWAA6YdqQTROvz0Ps/pqoiwAAIlpVAAAJaUAAAGJ8pA==
  • Thread-topic: [Xen-users] RAID10 Array

Hi Rob,
 
And if I were to use, say, 4 teamed ports coming out of the storage server and 2 teamed ports going into the Xen node, would the max I'd get still be 1Gbit?
Thanks
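(Rough numbers, for what it's worth: with link aggregation each individual
flow hashes onto a single physical link, so one connection tops out at
1Gbit either way. With several concurrent flows the ceiling is the narrower
team, i.e. min(4, 2) x 1Gbit = 2Gbit aggregate into the Xen node, and even
that only if the hash happens to spread the flows across both ports.)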


From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Thu 17/06/2010 14:15
To: Jonathan Tripathy
Subject: RE: [Xen-users] RAID10 Array

Hi Jonathan,

 

LACP and 802.3ad are used together on those HP SOHO switches. I might be wrong, but I think LACP allows some degree of automatic negotiation on the switch side.
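For what it's worth, on the CLI-managed ProCurve models the switch side of
this is a one-liner; a sketch from memory, with made-up port numbers (the
SOHO web-managed models expose the same thing as an LACP option in the web
GUI). "trunk ... lacp" builds the aggregate, "show lacp" confirms the
partner negotiated:

    trunk 1-4 trk1 lacp
    show lacp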

 

I have used LACP with Broadcom-based NICs under Windows and the HP switch you are looking at. You only need to enable LACP on the switch ports plugged into your disk box, and the software on the server should be able to sort out the rest (I enabled it with Broadcom NICs under Windows and it worked as advertised).
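On a Linux server the matching piece is a mode-4 bond. A minimal
Debian/Ubuntu /etc/network/interfaces sketch (needs the ifenslave package;
the interface names and addresses below are just examples, adjust to taste):

    auto bond0
    iface bond0 inet static
        address 10.0.0.2
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        # mode 4 = 802.3ad, i.e. LACP aggregation
        bond-mode 802.3ad
        # check link state every 100 ms
        bond-miimon 100
        # hash on IP and port so multiple flows can spread out
        bond-xmit-hash-policy layer3+4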

 

 

Rob

 

From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy
Sent: 17 June 2010 14:07
To: Adi Kriegisch; xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] RAID10 Array

From: Adi Kriegisch [mailto:kriegisch@xxxxxxxx]
Sent: Thu 17/06/2010 14:03
To: Jonathan Tripathy
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] RAID10 Array

Hi!

> Looking at this page https://help.ubuntu.com/community/HighlyAvailableAoETarget
> they seem to have made a Linux "bond" called bond0 and are telling the AoE
> target to use that. This confuses me...
> Would it be of any benefit to create a "mode 4" bond and use 802.3ad with ATA
> over Ethernet? Or would that be a waste, when AoE can use the interfaces
> directly?
ggaoed, for example, can handle multiple interfaces in its configuration and
is designed to deliver the highest performance, e.g. by automatically
load-balancing across several NICs.
If you want to use vblade, you might be better off with bonding, because
vblade cannot handle several interfaces in one instance, and you pay a
further performance penalty when running several instances of vblade
listening on different interfaces.
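To make that concrete, a sketch of the two approaches (the config key names
are from memory, and the device, shelf/slot and interface names are made up;
check ggaoed.conf(5) and vblade(8) for the real syntax):

    # /etc/ggaoed.conf -- one target exported on all four NICs at once
    [defaults]
    interfaces = eth0 eth1 eth2 eth3

    [array0]
    path = /dev/md0
    shelf = 0
    slot = 1

whereas vblade takes exactly one interface per instance
(vblade <shelf> <slot> <netif> <device>):

    vblade 0 1 eth0 /dev/md0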
I am not sure LACP enhances performance in your case: I think from one
server to the other you will only get 1Gbit. For LACP to work as expected
you need many-to-many or many-to-one connections; all packets belonging to a
connection will use the same wire. This article has some
details: http://serverfault.com/questions/8512/multiplexed-1-gbps-ethernet
and Wikipedia also has some information on this.
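A worked example of that last point, using the Linux bonding driver's
default layer2 transmit hash as I understand it: the outgoing slave is
chosen roughly as

    slave = (src_mac XOR dst_mac) mod n_slaves

so one storage server talking to one Xen node means one MAC pair, every
frame picks the same slave, and you are capped at a single 1Gbit link no
matter how many ports are in the bond.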

Another thing is that you lose the ability to have redundancy in the
switching backend.

-- Adi

-------------------------------------------------------------------------------------------------------------------

So if I use ggaoed and just put all 4 NICs into its config file, that should allow me to get 4Gbit of bandwidth? And no configuration is required on the switch?

BTW, does 802.3ad "mode 4" use LACP? Or am I getting mixed up?
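(As far as I know, yes: the bonding driver's "mode 4" is its 802.3ad
implementation, and 802.3ad negotiates via LACP, so they are the same
thing.)

For completeness, the initiator side can be pinned to specific NICs too; a
sketch, with interface names assumed:

    # load the AoE initiator, restricted to the storage-facing NICs
    modprobe aoe aoe_iflist="eth2 eth3"
    aoe-discover    # from aoetools: send a discovery broadcast
    aoe-stat        # list discovered e<shelf>.<slot> devices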


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

