
Re: [Xen-users] High availability Xen with bonding



On Friday 15 October 2010 13:44:42 Eric van Blokland wrote:
> Hello everyone,
> 
> A few days back I decided to give Ethernet port bonding in Xen another try.
> I've never been able to get it to work properly, and after a short search I
> found that the network-bridge-bonding script shipped with CentOS-5 probably
> wasn't going to solve my problems. Instead of searching for a tailored
> solution, I thought it would be a good idea to take the leap and try to
> understand Xen networking a bit better, and to refresh my networking
> knowledge.
> 
> I came up with a configuration that I'm currently testing in a
> pre-production environment, and I would love to hear from some experts
> whether what I'm doing is sane or whether I'm just plain lucky that my ping
> requests are actually getting through. Please don't hesitate to comment on
> efficiency or security, or to question my sanity, especially since
> networking is ordinarily not my cup of tea.
> 
> The hardware setup is simple. We have a physical machine with two gigabit
> Ethernet NICs, eth0 and eth1, each connected to its own switch. We want to
> achieve failover by bonding them (bond0). We're using mode 0 (round-robin)
> to gain some bandwidth as well.
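> 
> (For reference: on CentOS-5 the bonding driver is typically pulled in via
> an alias in /etc/modprobe.conf, roughly like the excerpt below; the mode
> and miimon options themselves are set through BONDING_OPTS further down.)
> 
> # Excerpt /etc/modprobe.conf
> alias bond0 bonding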
> 
> In this test setup, I'm only running one DomU.
> 
> Now for the configuration. For this test I haven't written a wrapper
> script yet and am just using the ifcfg files.
> 
> First of all I've disabled the bridge script in xend-config:
> (network-script '/bin/true')
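> 
> (The vif hotplug script is left at its default, so DomU vifs are still
> plugged into the bridge automatically. Roughly, the relevant lines in
> /etc/xen/xend-config.sxp now read:)
> 
> # Excerpt /etc/xen/xend-config.sxp
> (network-script '/bin/true')
> (vif-script vif-bridge)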
> 
> Then I set up the interfaces:
> 
> # ifcfg-eth0
> DEVICE=eth0
> USERCTL=no
> BOOTPROTO=none
> MACADDR=00:1E:C9:BB:3B:DE
> ONBOOT=yes
> MASTER=bond0
> SLAVE=yes
> 
> # ifcfg-eth1
> DEVICE=eth1
> USERCTL=no
> BOOTPROTO=none
> MACADDR=00:1E:C9:BB:3B:DF
> ONBOOT=yes
> MASTER=bond0
> SLAVE=yes
> 
> # ifcfg-bond0
> DEVICE=bond0
> BOOTPROTO=none
> ONBOOT=yes
> TYPE=Ethernet
> BONDING_OPTS='mode=0 miimon=100'
> BRIDGE=xenbr0
> 
> # ifcfg-xenbr0
> DEVICE=xenbr0
> BOOTPROTO=none
> ONBOOT=yes
> TYPE=Bridge
> STP=OFF
> FD=9
> HELLO=2
> MAXAGE=12
> 
> # ifcfg-veth0
> DEVICE=veth0
> BOOTPROTO=none
> ONBOOT=yes
> IPADDR=192.168.1.11
> NETMASK=255.255.255.0
> GATEWAY=192.168.1.1
> # Our Dom0 will need a HW (MAC) address
> MACADDR=00:16:3E:00:01:00
> 
> # ifcfg-vif0.0
> DEVICE=vif0.0
> BOOTPROTO=none
> ONBOOT=yes
> BRIDGE=xenbr0
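> 
> After writing the ifcfg files, the whole stack comes up with the normal
> init scripts, and the bond/bridge can be sanity-checked roughly like this
> (assuming bridge-utils is installed):
> 
> service network restart
> cat /proc/net/bonding/bond0   # shows the bonding mode and both slaves
> brctl show xenbr0             # should list bond0 (and vifs once DomUs run)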
> 
> Whatever firewalling I'm going to do, I presume it would be on bond0
> and/or veth0. Our DomUs will probably do their own firewalling anyway, as
> we can't predict what services they will be running. For the sake of
> efficiency, and given my lack of iptables knowledge, I've changed some
> bridge settings in sysctl.conf:
> 
> # Excerpt sysctl.conf
> net.bridge.bridge-nf-filter-vlan-tagged = 0
> net.bridge.bridge-nf-call-ip6tables = 0
> net.bridge.bridge-nf-call-iptables = 0
> net.bridge.bridge-nf-call-arptables = 0
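> 
> (These only take effect at boot; they can be applied to the running system
> with the standard tool:)
> 
> sysctl -p /etc/sysctl.conf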
> 
> Finally, it seemed the ARP requests weren't coming through. I fixed that
> with the following commands:
> 
> ip link set eth0 arp off
> ip link set eth1 arp off
> ip link set bond0 arp off
> ip link set vif0.0 arp off
> ip link set vif1.0 arp off
> 
> A wrapper script could do this for me in the future (a rough sketch
> follows below), as I haven't found an ifcfg setting that does it. The
> requirement to disable ARP to achieve proper, unrouted bridging limits the
> use of certain bonding modes, but I don't think I care.
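> 
> An untested sketch of such a wrapper (the file name is made up; it would
> replace '/bin/true' as the network-script in xend-config.sxp, and the
> vif1.0-style interfaces would still need handling once DomUs are started):
> 
> #!/bin/sh
> # /etc/xen/scripts/network-bond-noarp (hypothetical)
> case "$1" in
>   start)
>     # Disable ARP on the physical, bond and Dom0 vif interfaces.
>     for dev in eth0 eth1 bond0 vif0.0; do
>       ip link set "$dev" arp off 2>/dev/null || true
>     done
>     ;;
>   stop|status)
>     # Nothing to do.
>     ;;
> esac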
> 
> Ifconfig output after setup:
> 
> bond0     Link encap:Ethernet  HWaddr 00:1E:C9:BB:3B:DE
>           inet6 addr: fe80::21e:c9ff:febb:3bde/64 Scope:Link
>           UP BROADCAST RUNNING NOARP MASTER MULTICAST  MTU:1500  Metric:1
>           RX packets:27407897 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:29528747 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:677011912 (645.6 MiB)  TX bytes:2070391875 (1.9 GiB)
> 
> eth0      Link encap:Ethernet  HWaddr 00:1E:C9:BB:3B:DE
>           UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
>           RX packets:15509051 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:14731788 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:2338232100 (2.1 GiB)  TX bytes:3153334765 (2.9 GiB)
>           Interrupt:16 Memory:dfdf0000-dfe00000
> 
> eth1      Link encap:Ethernet  HWaddr 00:1E:C9:BB:3B:DE
>           UP BROADCAST RUNNING NOARP SLAVE MULTICAST  MTU:1500  Metric:1
>           RX packets:11898846 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:14796959 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:2633747108 (2.4 GiB)  TX bytes:3212024406 (2.9 GiB)
>           Interrupt:17 Memory:dfef0000-dff00000
> 
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>           RX packets:23829 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:23829 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:5148013 (4.9 MiB)  TX bytes:5148013 (4.9 MiB)
> 
> veth0     Link encap:Ethernet  HWaddr 00:16:3E:00:01:00
>           inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0
>           inet6 addr: fe80::216:3eff:fe00:100/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:27197523 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:14952082 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:541462356 (516.3 MiB)  TX bytes:807191362 (769.7 MiB)
> 
> vif0.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>           UP BROADCAST RUNNING NOARP MULTICAST  MTU:1500  Metric:1
>           RX packets:14952092 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:27197458 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:807192826 (769.7 MiB)  TX bytes:541406194 (516.3 MiB)
> 
> vif1.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
>           inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>           UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
>           RX packets:197922 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:245822 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:32
>           RX bytes:193048753 (184.1 MiB)  TX bytes:26837000 (25.5 MiB)
> 
> virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00
>           inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
>           inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
> 
> xenbr0    Link encap:Ethernet  HWaddr 00:1E:C9:BB:3B:DE
>           inet6 addr: fe80::21e:c9ff:febb:3bde/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:48495 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:3432718 (3.2 MiB)  TX bytes:398 (398.0 b)
> 
> I hope I haven't forgotten a vital setting. Please let me know what you think.
> 
> Kind regards,
> 
> Eric


Eric,

I'm curious to see your progress with this. Until now I've tackled network
redundancy with multipathing rather than bonding; however, that doesn't
provide added speed, even though theoretically it should. So I recently
decided to switch to bonding for the hypervisors' connections to iSCSI,
using round-robin over separate switches, just like you. I'm not yet at the
point of installing domUs, so I can't really comment on how and whether it
works. I will know next week, though, so I'll return to this post with my
findings...

B.








_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

