Re: [Xen-users] Vif dropping packets
Basically I wanted to know if any resource constraints exist. Do packet drops occur all the time?

1. What about with a steady-state load?
2. What about no load?
3. What about a heavy load?

There's an IBM study that shows Xen systems with more than 12 VMs and CPU overcommitment can have unusual delays responding to requests. I've seen similar behavior, but only on a busy host with busy VMs. I think that you need more data.
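If it helps, a quick way to get that data is to sample the vif's TX drop counter once a second and see whether it climbs with load or sits still. A rough sketch (the interface name vif22.0 is just the one from your mail, and the sysfs statistics path assumes a reasonably recent dom0 kernel):

# sample the backend interface's TX drop counter once a second;
# run sar or xentop alongside to correlate drops with CPU and steal time
while true; do
    echo "$(date '+%T')  $(cat /sys/class/net/vif22.0/statistics/tx_dropped)"
    sleep 1
done

If the counter only moves when the guests are busy, that points at queueing on the backend side rather than anything inside the domU.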
On Jun 26, 2009, at 6:26 PM, Mike Lovell <mike@xxxxxxxxxxxx> wrote:

I'm not entirely sure what you are asking with that second question. sar is installed in the guests. I did `sar -u 1 1000` to watch the status for a while. Here is an abbreviated output with the averages and the line I saw with the highest %steal:

04:13:12 PM  CPU  %user  %nice  %system  %iowait  %steal   %idle
04:13:16 PM  all   0.00   0.00     0.00     0.00    0.99   99.01
Average:     all   0.01   0.00     0.02     0.55    0.05   99.37

Overall, these VMs aren't doing a lot, mainly sitting idle waiting for a QA engineer to test code functionality. I should probably say a little more about the config. The host boxes are dual quad-core Opteron 2346HE systems with 24GB of memory and a ton of disks. Currently there are 28 virtual machines running on the one I have been checking, but most of them are idle.

Hope that helps or is what you were looking for.

mike

Peter Booth wrote:

Mike,

You don't say anything about the workload on your system or the resource consumption that it causes. Do you have sar installed on the domUs? Does a one-second vmstat show a %st that's above 1%?

Sent from my iPhone

On Jun 26, 2009, at 4:22 PM, Mike Lovell <mike@xxxxxxxxxxxx> wrote:

I have a problem with packets being dropped on some vif interfaces. I currently have a box running Debian Lenny with Xen 3.2.1. The host is running the Xen-ified kernel from the Debian repos, 2.6.26-1-xen-amd64. The host has a single bridge, 'vmnet', that one physical interface and all of the guests' interfaces connect to. If I do an 'ifconfig vifX.0', I see dropped TX packets. Looking at the guest does not show any dropped packets on its interface.

From the host:

# cat /etc/network/interfaces
auto vmnet
iface vmnet inet static
        address 10.135.2.71
        netmask 255.255.255.224
        bridge_ports eth1
        bridge_stp off
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12

# brctl show vmnet
bridge name     bridge id               STP enabled     interfaces
vmnet           8000.003048c8166d       no              eth1
                                                        vif22.0
                                                        ... <lots of interfaces>

# ifconfig vif22.0
vif22.0   Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:7946252 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8282858 errors:0 dropped:160 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:5758663699 (5.3 GiB)  TX bytes:5887860418 (5.4 GiB)

From the guest:

# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3E:02:00:91
          inet addr:10.135.2.91  Bcast:10.135.2.127  Mask:255.255.255.192
          inet6 addr: fe80::216:3eff:fe02:91/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8283072 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7946364 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5887884877 (5.4 GiB)  TX bytes:5869927068 (5.4 GiB)

# cat /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
        post-up ethtool -K eth0 tx off

I am stumped on this, and it is causing some problems for some of the applications being tested across the virtual machines. For example, one app on one VM needs to connect to a database on another VM, but it can't and complains. Do any of you know what might be causing this or any ways to fix it? Thanks in advance.

mike
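One detail that stands out in the ifconfig output above: the backend vif has txqueuelen:32 while the guest's eth0 has txqueuelen:1000. A transmit queue that short on the dom0 side is a fairly common cause of sporadic TX drops on vifs during traffic bursts, even when the machines look idle on average. A low-risk thing to try (the interface name is just the one from the report, and the setting does not persist across a domU restart, so to stick it would have to go into the vif hotplug script, e.g. vif-bridge on a bridged setup):

# raise the transmit queue on the vif that shows drops (takes effect immediately)
ifconfig vif22.0 txqueuelen 1000

# the same thing with iproute2
ip link set dev vif22.0 txqueuelen 1000

If the drop counter stops climbing afterwards, the queue length was the culprit; if it keeps climbing, checksum offload on the vif (ethtool -K vif22.0 tx off, matching what the guest already does on eth0) would be the next thing to rule out.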