
[Xen-devel] Channel bonding on Guest VM being slow


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Muhammad Atif <m_atif_s@xxxxxxxxx>
  • Date: Sun, 4 Jan 2009 16:36:06 -0800 (PST)
  • Cc:
  • Delivery-date: Sun, 04 Jan 2009 16:37:18 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi guys, greetings for the new year.

My problem is simple: I cannot get channel bonding to work properly with Xen.

I am trying to boost the bandwidth of guest domains by using link aggregation 
(a.k.a. channel bonding). I have tried round-robin (mode 0) and active-backup / 
high-availability (mode 1). I know that mode 1 will not give a bandwidth boost. But 
strangely, once I use mode 1 for the guest I get bandwidth almost equal to one 
network interface, while with round-robin the bandwidth is cut roughly in half 
(in other words no better, and possibly worse, than active-backup). The same 
configuration does yield a bandwidth boost for dom0, though.
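For context, the bonding setup on dom0 is along these lines (a sketch only; the
slave NIC names eth2/eth3 and the 192.168.4.x address are placeholders, not
necessarily my exact values):

 # /etc/modprobe.d/bonding -- sketch; slave names and IP are placeholders
 alias bond0 bonding
 options bonding mode=0 miimon=100   # mode=0 round-robin, mode=1 active-backup

 # bring up the bond and enslave the two physical NICs
 modprobe bond0
 ifconfig bond0 192.168.4.10 netmask 255.255.255.0 up
 ifenslave bond0 eth2 eth3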

One possibly lame question: has anyone ever tried to improve the bandwidth of domUs 
using channel bonding and actually achieved it? I am using Ubuntu 8.04 and Xen 3.3.0 
compiled from source, and I have tried the same with Xen 3.1.x without any luck. I am 
quite desperate to get this working now.

I posted my configuration on the users list some days back, but there was not 
much help, so this time I am posting to both lists. I would really appreciate 
any help from you guys.



Some of the configuration is as follows.
The routing table at dom0 (without the Xen bridge added) is:
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 192.168.4.0     *               255.255.255.0   U     0      0        0 bond0
 192.168.0.0     *               255.255.255.0   U     0      0        0 eth0
 default         xxxx            0.0.0.0         UG    100    0        0 eth0


 #brctl show
 bridge name    bridge id              STP enabled    interfaces
 eth0            8000.00d06809191a      no              peth0
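(The eth0/peth0 pairing above is just what Xen's default network-bridge script
produces; xend is configured roughly like this -- a sketch, my actual
xend-config.sxp may differ:)

 # /etc/xen/xend-config.sxp fragment (sketch)
 (network-script 'network-bridge netdev=eth0')
 (vif-script vif-bridge)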

With the Xen bridge brbond (on top of bond0) added, the routing table becomes:

 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 192.168.4.0     *               255.255.255.0   U     0      0        0 brbond
 192.168.0.0     *               255.255.255.0   U     0      0        0 eth0
 default         xxxx            0.0.0.0         UG    100    0        0 eth0

 #brctl show
 bridge name     bridge id               STP enabled     interfaces
 brbond          8000.00d06809191b       no              bond0
 eth0            8000.00d06809191a       no              peth0
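For reference, brbond was created over the bond with the usual brctl sequence,
roughly like this (a sketch, not my exact script; the IP address is a
placeholder):

 # sketch: create the bridge and move the IP from bond0 onto brbond
 brctl addbr brbond
 brctl addif brbond bond0
 ifconfig bond0 0.0.0.0 up
 ifconfig brbond 192.168.4.10 netmask 255.255.255.0 up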

 The routing table for the domU is:

 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 192.168.4.0     *               255.255.255.0   U     0      0        0 eth1
 192.168.0.0     *               255.255.255.0   U     0      0        0 eth0
 default         xxxx            0.0.0.0         UG    0      0        0 eth0

 After domU creation, brctl show on dom0 looks fine.
 #brctl show
 bridge name    bridge id              STP enabled    interfaces
 brbond          8000.00d06809191b      no              bond0
                                                        vif3.1
 eth0            8000.00d06809191a      no              peth0
                                                        vif3.0
===================================================
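For completeness, the vif line in the domU config file that produces the two
bridge attachments above looks roughly like this (a sketch; MAC addresses are
omitted and left auto-generated):

 # domU config fragment (sketch): first vif -> bridge eth0, second -> brbond
 vif = [ 'bridge=eth0', 'bridge=brbond' ]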




      
