
Re: [Xen-users] moving LV's devices to a SAN server.


  • To: Javier Guerra <javier@xxxxxxxxxxx>
  • From: Israel Garcia <igalvarez@xxxxxxxxx>
  • Date: Mon, 2 Nov 2009 16:31:58 -0500
  • Cc: "Fajar A. Nugraha" <fajar@xxxxxxxxx>, Xen Users <Xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 02 Nov 2009 13:32:48 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On 11/2/09, Javier Guerra <javier@xxxxxxxxxxx> wrote:
> On Mon, Nov 2, 2009 at 11:39 AM, Israel Garcia <igalvarez@xxxxxxxxx> wrote:
>> Can you help me?
>
> not much, unfortunately.  Even though there are some standards,
> compliance is spotty at best, so you'll have to test whether your
> devices interoperate.
>
Hi Javier,

Your comment about link aggregation is very interesting, thanks  :-)
I think this setup (using link aggregation on both sides) is the best
way to get more than 1GbE of bandwidth out of an Ethernet network
serving SAN boxes. I've searched the web extensively and haven't found
a better setup. I'm going to test LACP/bonding on both the hosts and
the switch (rough sketches below), and if possible I'll send the list
some results.
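
On the host side I expect to try something like the following. It's
only a minimal sketch, assuming the Linux bonding driver with two NICs
named eth0/eth1 (the interface names and the address are made up for
the example; the persistent-config equivalent depends on your distro):

  # load the bonding driver in 802.3ad (LACP) mode,
  # monitoring link state every 100 ms
  modprobe bonding mode=802.3ad miimon=100 lacp_rate=fast

  # bring up the aggregate interface and enslave both NICs
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1

  # verify that an LACP aggregator actually formed
  cat /proc/net/bonding/bond0

Of course the two switch ports those NICs plug into have to be
configured as an LACP group as well, otherwise the bond falls back to
a single usable link.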

thanks again.

regards,
Israel.

> in any case, this was my reasoning for mentioning port aggregation (or
> more precisely, link aggregation):
>
> - the 'usual' topology for all things Ethernet (including iSCSI) is
> to simply put the switches in the middle and pull one cable to each
> host.
>
> - in a SAN this creates a bottleneck, since it's common to have just
> one or two storage boxes for several hosts (especially when just
> starting!).  The single Ethernet port going to the storage box limits
> the total access bandwidth to just 1Gb/s shared by all hosts.
>
> - most iSCSI devices currently include several (4-6) GbE ports.
>
> - the naïve way to use all these ports would be to ditch the Ethernet
> switch and just connect one host to each port.  This gives you 1Gb/s
> dedicated to each host, and total data bandwidth is limited only by
> the platter and internal backbone speeds.
>
> - unfortunately, this strategy is too limiting for later growth.  Not
> only do you have a limited number of ports, it also makes it nearly
> impossible to add a second storage box.
>
> - so, what you can do is keep the central switch and plug each host
> into a single switch port; but for the storage box, connect several
> of its ports to several ports on the switch.  If the link aggregation
> features of both the storage box and the switch match, you now have a
> single very fat link between the box and the switch.  From the point
> of view of the hosts it's exactly the same as the 'usual' topology
> (one device on each switch port), but no single host can saturate the
> storage bandwidth.
>
> - expandability isn't impaired either: you can add extra hosts
> without any change, and extra storage just by creating additional
> link aggregation groups.
>
> hope it helps, at least in clarifying the general concepts.  for
> details you'll have to consult the docs of both your storage box and
> switches, and experiment a lot!
>
> --
> Javier
>
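
PS: for the switch side of the aggregation Javier describes, the
syntax depends entirely on the vendor. Purely as an illustration,
assuming a Cisco IOS switch with the storage box cabled to four ports
Gi0/1 through Gi0/4 (the port numbers are made up), an LACP group
would look roughly like:

  ! bundle four gigabit ports into one LACP aggregate (port-channel 1)
  interface range GigabitEthernet0/1 - 4
   channel-group 1 mode active

  ! afterwards, check that the bundle came up
  show etherchannel summary

'mode active' makes the switch initiate the LACP negotiation; the
storage box has to speak LACP on its end too, or the group won't form
and you're back to one usable link.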


-- 
Regards;
Israel Garcia

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

