
FW: [Xen-users] Performance and limitations of virtual bridges


  • To: <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Fermín Galán Márquez <fermin.galan@xxxxxxx>
  • Date: Wed, 9 May 2007 18:09:35 +0200
  • Delivery-date: Wed, 09 May 2007 09:08:51 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AceQ1goO+44Ny3YWSW6xqk2gxwipKQBdiF+w

Hi,

> Is there a limit to the number of interfaces a virtual bridge (created
> with brctl) can support without a severe impact on performance?
>
> I guess there is no absolute answer to that question :), but maybe
> there is some procedure/tool to know the "stress" or "load" that a
> virtual bridge is under at a given moment (in a similar way that
> "top" can show you the CPU load).
>
> I ask because I'm using a virtual bridge with 14 interfaces (each
> interface corresponds to a Xen virtual machine on the same physical host)
> and, given that I'm experiencing transmission delays in the network
> supported by the bridge, I suspect a loss of performance in the bridge.

As an additional data point, I've discovered a significant number of dropped
TX packets in the ifconfig statistics for the bridged interfaces (vif2.0
to vif15.0). Details follow:

vserver:~# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              vif0.0
                                                        peth1
                                                        vif2.0
                                                        vif3.0
                                                        vif4.0
                                                        vif5.0
                                                        vif6.0
                                                        vif7.0
                                                        vif8.0
                                                        vif9.0
                                                        vif10.0
                                                        vif11.0
                                                        vif12.0
                                                        vif13.0
                                                        vif14.0
                                                        vif15.0

For example, for vif2.0, vif7.0 and vif15.0:

vserver:~# ifconfig vif2.0
vif2.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:163814 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1195595 errors:0 dropped:11986 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14946932 (14.2 MiB)  TX bytes:102413871 (97.6 MiB)

vserver:~# ifconfig vif7.0
vif7.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:174431 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1200822 errors:0 dropped:14597 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:19984777 (19.0 MiB)  TX bytes:104949644 (100.0 MiB)

vserver:~# ifconfig vif15.0
vif15.0   Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:208560 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1225643 errors:0 dropped:16857 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:23837063 (22.7 MiB)  TX bytes:109020766 (103.9 MiB)

Although this may seem like a small fraction of the total TX packets (around
1-1.5%), for the conventional interfaces on the physical host
(eth0, etc.) this counter is exactly 0.
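To watch these counters without scanning full ifconfig output for every vif, the kernel exposes the same statistics under /sys. A small sketch, assuming the xenbr0 bridge name from the `brctl show` output above and a 2.6 kernel with sysfs mounted:

```shell
# Print the TX drop counter of a single interface, straight from sysfs.
# This is the same value ifconfig reports as "TX ... dropped:".
tx_dropped() {
  cat "/sys/class/net/$1/statistics/tx_dropped"
}

# List every port enslaved to the bridge with its current TX drop count.
# Running this periodically (e.g. under `watch`) shows which vifs are
# dropping and how fast the counters grow.
for port in /sys/class/net/xenbr0/brif/*; do
  [ -e "$port" ] || continue           # skip if the bridge has no ports
  iface=$(basename "$port")
  echo "$iface tx_dropped=$(tx_dropped "$iface")"
done
```

Comparing two samples taken a few seconds apart gives a rough per-vif drop rate, which is closer to the "bridge load" measure asked about above than the raw totals.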

vserver:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0E:0C:A0:6D:9F
          inet addr:10.0.0.99  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20e:cff:fea0:6d9f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2819 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1806 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:244146 (238.4 KiB)  TX bytes:351857 (343.6 KiB)

So, this could mean that some of the packets the virtual machines send
to the network through the virtual bridge are being discarded. Does this
conclusion make any sense? If so, it would mean that the performance of
the bridge is being impacted (and the corollary: 14 virtual machines on
the same physical host sharing the same virtual bridge are too many :)
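One detail worth noting in the output above: every vif reports txqueuelen:0, while physical NICs normally have a queue of 1000. With no queue, any burst the domU's ring buffer cannot absorb is dropped immediately rather than briefly queued. A hedged sketch of one mitigation to try (the vifN.0 names are from this setup; requires root, and I can't confirm it eliminates the drops here):

```shell
# Give each Xen vif a real TX queue instead of the default of 0, so
# short bursts are buffered rather than counted as "dropped".
set_vif_queues() {
  qlen="$1"
  for dev in /sys/class/net/vif*.0; do
    [ -e "$dev" ] || continue          # no-op on hosts without vifs
    ifconfig "$(basename "$dev")" txqueuelen "$qlen"
  done
}

set_vif_queues 1000
```

If the TX drop counters stop growing after this, the drops were queueing losses on the vifs rather than a hard limit on the number of bridge ports.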

Best regards, 

--------------------
Fermín Galán Márquez
CTTC - Centre Tecnològic de Telecomunicacions de Catalunya
Parc Mediterrani de la Tecnologia, Av. del Canal Olímpic s/n, 08860
Castelldefels, Spain
Room 1.02
Tel : +34 93 645 29 12
Fax : +34 93 645 29 01
Email address: fermin dot galan at cttc dot es 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

