
[Xen-users] Xen network performance issue



Hi all,

I've got a stable Xen platform which has been working well for some time, but I recently converted a physical Linux machine to a VM and have run into a networking issue.

This VM requires two network interfaces (it is a firewall machine): one facing the Internet and the second for the LAN.

The dom0 (physical machine) has this config:
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
auto xenbr0
iface xenbr0 inet static
    address 10.10.10.34
    netmask 255.255.240.0
    gateway 10.10.10.254
    bridge_maxwait 5
    bridge_ports regex eth0

auto xenbr5
iface xenbr5 inet manual
    bridge_ports eth0.5

So xenbr5 sits on top of eth0.5, which corresponds to VLAN 5 on the switch. The WAN router's switch port is untagged in VLAN 5 and not a member of any other VLAN; the dom0 machines' ports are untagged in VLAN 4 (the normal LAN network) and tagged for VLAN 5.
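For reference, the VLAN/bridge setup above is the equivalent of the following manual steps (a sketch from memory, assuming bridge-utils and the 8021q module; ifupdown does all of this automatically from the config):

modprobe 8021q
ip link add link eth0 name eth0.5 type vlan id 5
ip link set eth0.5 up
brctl addbr xenbr5
brctl addif xenbr5 eth0.5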

If I migrate the domU to another physical machine, the problem moves with it, and it also affects all VMs (including the dom0) on whichever physical machine this new "mail" VM is running.

brctl show
bridge name    bridge id            STP enabled    interfaces
xenbr0         8000.f46d04efe254    no             eth0
                                                   vif6.0
                                                   vif6.0-emu
xenbr5         8000.f46d04efe254    no             eth0.5
                                                   vif6.1
                                                   vif6.1-emu
route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref Use Iface
0.0.0.0         10.10.10.254    0.0.0.0         UG    0      0        0 xenbr0
10.10.0.0       0.0.0.0         255.255.240.0   U     0      0        0 xenbr0

The domU config file for the "mail" VM is:

kernel        = "/usr/lib/xen-4.1/boot/hvmloader"
builder        = 'hvm'
device_model    = '/usr/lib/xen-4.1/bin/qemu-dm'
boot        = 'dc'
localtime    = 1
vnc        = 1
vncviewer    = 0
vncconsole    = 0
vncdisplay    = 9
vncunused    = 0
stdvga        = 0
acpi        = 1
apic        = 1
name        = "mail"
hostname    = 'mail'
disk        = ['phy:/dev/mapper/mpathmail,xvda,w' ]
memory        = 2048
cpus        = "4,5" # Which physical CPUs to allow
vcpus        = 2     # How many virtual CPUs to present
vif = ['bridge=xenbr5, mac=00:16:3e:43:a8:09', 'bridge=xenbr0, mac=00:16:3e:43:d8:09']

The problem can be seen by pinging either the physical machine or the VM's IP: ping times sit well under a millisecond, then escalate to several seconds, then drop back to normal, and so on:
ping 10.10.10.34
PING 10.10.10.34 (10.10.10.34) 56(84) bytes of data.
64 bytes from 10.10.10.34: icmp_seq=1 ttl=64 time=0.289 ms
64 bytes from 10.10.10.34: icmp_seq=2 ttl=64 time=0.277 ms
64 bytes from 10.10.10.34: icmp_seq=3 ttl=64 time=0.281 ms
64 bytes from 10.10.10.34: icmp_seq=4 ttl=64 time=340 ms
64 bytes from 10.10.10.34: icmp_seq=5 ttl=64 time=0.260 ms
64 bytes from 10.10.10.34: icmp_seq=6 ttl=64 time=79.9 ms
64 bytes from 10.10.10.34: icmp_seq=7 ttl=64 time=0.269 ms
64 bytes from 10.10.10.34: icmp_seq=8 ttl=64 time=0.264 ms
64 bytes from 10.10.10.34: icmp_seq=9 ttl=64 time=182 ms
64 bytes from 10.10.10.34: icmp_seq=10 ttl=64 time=311 ms
64 bytes from 10.10.10.34: icmp_seq=11 ttl=64 time=717 ms
64 bytes from 10.10.10.34: icmp_seq=12 ttl=64 time=1029 ms
64 bytes from 10.10.10.34: icmp_seq=13 ttl=64 time=1422 ms
64 bytes from 10.10.10.34: icmp_seq=14 ttl=64 time=1725 ms
64 bytes from 10.10.10.34: icmp_seq=15 ttl=64 time=1627 ms
64 bytes from 10.10.10.34: icmp_seq=16 ttl=64 time=2080 ms
64 bytes from 10.10.10.34: icmp_seq=17 ttl=64 time=2385 ms
64 bytes from 10.10.10.34: icmp_seq=18 ttl=64 time=2375 ms
64 bytes from 10.10.10.34: icmp_seq=19 ttl=64 time=2876 ms
64 bytes from 10.10.10.34: icmp_seq=20 ttl=64 time=2830 ms
64 bytes from 10.10.10.34: icmp_seq=21 ttl=64 time=2418 ms
64 bytes from 10.10.10.34: icmp_seq=22 ttl=64 time=1420 ms
64 bytes from 10.10.10.34: icmp_seq=23 ttl=64 time=421 ms
64 bytes from 10.10.10.34: icmp_seq=24 ttl=64 time=0.292 ms
64 bytes from 10.10.10.34: icmp_seq=25 ttl=64 time=0.286 ms
64 bytes from 10.10.10.34: icmp_seq=26 ttl=64 time=0.257 ms
^C
--- 10.10.10.34 ping statistics ---
26 packets transmitted, 26 received, 0% packet loss, time 25016ms
rtt min/avg/max/mdev = 0.257/932.656/2876.987/1016.327 ms, pipe 3

On dom0, if I run "tcpdump -tn -i eth0" (or xenbr0), I do not see any packets that should be on the WAN side (i.e. packets for the WAN VLAN don't seem to be leaking out). If I run "tcpdump -tn -i eth0.5" (or xenbr5), I equally see only the WAN packets and none of the LAN packets.

One thought I had was that perhaps I should specify a different emulated network card type; by default it seems to be using an rtl8139. However, since the problem also impacts dom0, I don't see how the card type Xen presents to the domU should make any difference.
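If it does turn out to be worth testing, I believe (from memory of the xm config syntax, so please correct me if I'm wrong) the vif entries accept a model parameter for HVM guests, e.g.:

vif = ['bridge=xenbr5, mac=00:16:3e:43:a8:09, model=e1000', 'bridge=xenbr0, mac=00:16:3e:43:d8:09, model=e1000']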

I'm assuming I've somehow managed to create a loop, or something equally stupid, somewhere, but I'm running out of places to look and am not sure how to track it down. Any assistance would be greatly appreciated.
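In case it helps anyone spot the loop, these are the checks I'm planning to run next (commands from memory, so corrections welcome):

# Watch for MAC addresses flapping between bridge ports (a classic loop symptom)
brctl showmacs xenbr0
brctl showmacs xenbr5
# Temporarily enable STP on the bridges to see if it blocks a looped port
brctl stp xenbr0 on
brctl stp xenbr5 on
# Check interface counters for a broadcast storm
ip -s link show eth0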

Regards,
Adam

--
Adam Goryachev Website Managers www.websitemanagers.com.au

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
