
Re: [Xen-API] lacp network bonding not using all NICs

  • To: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
  • From: Brian Menges <bmenges@xxxxxxxxxx>
  • Date: Wed, 27 Mar 2013 22:30:39 -0700
  • Accept-language: en-US
  • Delivery-date: Thu, 28 Mar 2013 05:30:57 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: Ac4rbhajSM/3NQ1cTA6ZQPGJx2xWCwABlIm8
  • Thread-topic: [Xen-API] lacp network bonding not using all NICs

LACP can sometimes be a little misleading.

Similar to what Ben said, LACP can be configured to balance on a per-flow basis 
(an L3/L4 hash of source/destination IP:port on transmit/receive) or on a 
hardware-address basis (a MAC-to-MAC hash).

Generally speaking, an L3 IP:port hash is optimal, and it is traditionally the 
default for most applications, but do note where, how, and by whom the mount is 
being serviced. If XCP is servicing the mount, and the source_ip:port 
combination is always the same, and the destination_ip:port is always the same, 
then LACP will rightfully put all of that traffic on a single link. If XCP 
services one mount to one large storage pool, that is one socket/flow and 
therefore has a one-link maximum throughput. If XCP services many connections 
(even to the same server, but multiple connections), then the source port 
differs per connection, so LACP can hash each flow onto a different link.
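The reasoning above can be sketched as a runnable toy (this uses a plain 
checksum, not the actual hash OVS or the switch computes): an unchanging flow 
tuple always lands on the same link, while connections differing only in 
source port can spread across links.

```shell
#!/bin/sh
# Toy per-flow balancer: map a flow tuple onto one of two links.
# Illustration only -- NOT the real OVS/LACP hash function.
link_for_flow() {
    # $1 = "src_ip:src_port->dst_ip:dst_port"
    h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo $((h % 2))
}

# One long-lived mount: the tuple never changes, so the flow is
# pinned to a single link (one-link maximum throughput).
link_for_flow "10.0.0.1:800->10.0.0.2:2049"
link_for_flow "10.0.0.1:800->10.0.0.2:2049"

# Many connections, even to the same server, differ in source port,
# so the balancer can pick a different link per flow.
for p in 40000 40001 40002 40003; do
    link_for_flow "10.0.0.1:$p->10.0.0.2:2049"
done
```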

From your paste, it looks like LACP is doing justice to your configuration, 
pushing the high flow on one link and the other flows to the remaining link.
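For reference, the OVS-level knobs behind this behavior (using the bond name 
from your paste; on XCP these are normally driven through the bond's 
other-config, so treat this as a sketch of the underlying commands) look 
something like:

```shell
# Negotiate LACP with the switch and hash per L4 flow (IP:port)
# rather than per source MAC (balance-slb).
ovs-vsctl set port bond1 lacp=active
ovs-vsctl set port bond1 bond_mode=balance-tcp

# Inspect which hash buckets (flows) landed on which slave.
ovs-appctl bond/show bond1
```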

Brian S. Menges
Principal Engineer, DevOps
2 Harrison, Suite 200 | San Francisco, CA | 94105
D 415.869.7000 | F 415.869.7001
From: xen-api-bounces@xxxxxxxxxxxxx [xen-api-bounces@xxxxxxxxxxxxx] On Behalf 
Of Ben Pfaff [blp@xxxxxxxxxxxxxxx]
Sent: Wednesday, March 27, 2013 21:36
To: xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-API] lacp network bonding not using all NICs

Carlos Reategui <creategui@xxxxxxxxx> wrote:

> According to ovs:
> # ovs-appctl bond/show bond1
> bond_mode: balance-tcp
> bond-hash-algorithm: balance-tcp
> bond-hash-basis: 0
> updelay: 31000 ms
> downdelay: 200 ms
> next rebalance: 709579 ms
> lacp_negotiated: true
> slave eth1: enabled
>         active slave
>         may_enable: true
>         hash 0: 0 kB load
>         hash 81: 1382 kB load
>         hash 85: 2419 kB load
>         hash 157: 0 kB load
>         hash 189: 2378 kB load
>         hash 253: 0 kB load
> slave eth0: enabled
>         may_enable: true
>         hash 89: 0 kB load
>         hash 222: 4069129 kB load
> So as you can see all my traffic is going out a single NIC.  Is there a
> different bond-hash-algorithm I should use?

It looks like you only have a few flows, with one of those flows
having dramatically more traffic than the rest.  If that's true,
then I don't know how OVS could do better.  It looks like it's
doing the best job it can: put the big flow on one NIC and the
rest on the other NIC.
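Summing the per-hash loads from the bond/show output above makes that concrete: 
the one busy flow on eth0 dwarfs everything on eth1 combined, so no hash 
algorithm could spread this traffic more evenly.

```shell
#!/bin/sh
# Per-hash loads (kB) copied from the bond/show output above.
eth1=$((0 + 1382 + 2419 + 0 + 2378 + 0))
eth0=$((0 + 4069129))
echo "eth1 total: ${eth1} kB"    # -> 6179 kB
echo "eth0 total: ${eth0} kB"    # -> 4069129 kB
echo "ratio: $((eth0 / eth1))x"  # -> ~658x
```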

Xen-api mailing list

