Re: [Xen-users] [SOLVED] Lots of udp (multicast) packet loss in domU
Sorry for the top post, but I wanted to let everyone know that I solved this
issue and how it was solved. By removing the CPU cap (cpu_cap = 600 # Don't
use more than 6 real CPUs), everything runs perfectly (well, we lost 20
packets over a 12-hour period, much better than the 1 per second we were
losing). You can also apply the settings live with:

    xm sched-credit -d 1 -w 1024 -c 0

(use the appropriate weight for your own domU).

Now... why did this happen? No idea. Our domU was running with CPU averages
around 400 to 430%; since there were 6 cores in my domU, that would appear
to leave plenty of cycles to process incoming UDP traffic. Any Xen experts
care to weigh in on why this would happen? Note that we are using the credit
scheduler; should we be?

Best,
--Mike

On Mon, Jan 12, 2009 at 5:42 PM "Mike Kazmier" <DaKaZ@xxxxxxxxx> wrote:
> Hello,
>
> After a few of us spent a week googling for answers, I feel compelled to
> ask this question: how do I stop packet loss between my dom0 and domU? We
> are currently running xen-3.3.0 (and have tried xen-3.2.1) on a Gentoo
> system with a 2.6.18 kernel for both domU and dom0. We have also tried a
> 2.6.25 kernel for dom0 with exactly the same results.
>
> The goal is to run our multicast processing application in a domU with the
> bridged configuration. Note: we do not have any problem if we put the
> network interfaces into PCI passthrough and use them exclusively in the
> domU, but that is less than ideal, as occasionally other domUs need to
> communicate with those feeds.
>
> So, when we have little (sub-100 Mbps) multicast traffic coming in,
> everything is fine. Over that, we start to see packet loss approaching 5%,
> but the loss is only seen in the domU. I can run our realtime analysis
> tool in dom0 and domU at the same time, on the same multicast feed, and
> found that in dom0 all packets are accounted for.
> Initially, I found a large number of dropped frames on the VIF interface,
> but after running
>
>     echo -n 256 > /sys/class/net/eth2/rxbuf_min
>
> all reported errors went away (i.e., dom0 does not report dropping any
> packets), but we still see loss.
>
> Any suggestions at all would be greatly appreciated. I know we have
> something borked on our end, since this white paper shows high domU UDP
> performance:
> http://www.cesnet.cz/doc/techzpravy/2008/effects-of-virtualisation-on-network-processing
> but at the same time a google search shows tons of issues around UDP
> packet loss with Xen.
>
> The hardware is an 8-core, 2.33 GHz Penryn (Xeon) based system. We have
> Intel quad GigE NICs with the latest Intel igb drivers. Here is the
> current domU config:
>
> ----------------------------------
> # general
> name = "xenF";
> memory = 1024;
> vcpus = 6;
> builder = "linux";
> on_poweroff = "destroy";
> on_reboot = "restart";
> on_crash = "restart";
> cpu_weight = 1024  # xenF gets 4 times the default (256) weight
> cpu_cap = 600      # Don't use more than 6 real CPUs
>
> # This lets us use the xm console
> extra = " console=xvc0 xencons=xvc0";
>
> # booting
> kernel = "/boot/vmlinuz-2.6.18-xen-domU";
>
> # virtual harddisk
> disk = [ "file:/var/xen/domU-xenF,xvda1,w" ];
> root = "/dev/xvda1 ro";
>
> # virtual network
> vif = [ "bridge=xenbr0", "bridge=xenbr1", "bridge=xenbr2" ]
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
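[Editor's note] The live re-tune Mike describes can be wrapped in a small script. A sketch only, under assumptions not in the original post: a DRY_RUN mode that prints the `xm` command instead of running it, and defaults of domain ID 1 and weight 1024 taken from the poster's own invocation.

```shell
#!/bin/sh
# Sketch: apply the credit-scheduler fix from this thread at runtime,
# instead of editing the domU config and restarting the domain.
# Assumptions (hypothetical, not from the original post): DRY_RUN mode,
# and defaults DOMID=1 / WEIGHT=1024 matching the poster's command.

DRY_RUN=${DRY_RUN:-1}
DOMID=${DOMID:-1}
WEIGHT=${WEIGHT:-1024}

run() {
    # In dry-run mode, print the command instead of executing it.
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

# Weight 1024 is 4x the default of 256; cap 0 means uncapped. Removing
# the 600% cap is what stopped the UDP loss in this thread.
run xm sched-credit -d "$DOMID" -w "$WEIGHT" -c 0
```

With DRY_RUN=0 this executes the same `xm sched-credit -d 1 -w 1024 -c 0` command given above; the dry-run default lets you preview it first.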
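[Editor's note] The rxbuf_min tweak quoted in the original mail silently does nothing if the sysfs attribute is absent (it exists only on Xen netfront devices). A guarded sketch, assuming as in the thread that eth2 is the interface carrying the multicast feed:

```shell
#!/bin/sh
# Sketch of the netfront receive-buffer tweak from the quoted mail, with
# a guard so a missing knob fails loudly instead of silently.
# Assumption: eth2 carries the multicast feed; rxbuf_min is a Xen
# netfront sysfs attribute and will not exist on other NIC drivers.

knob_path() {
    # sysfs path of the netfront minimum receive-buffer knob
    echo "/sys/class/net/$1/rxbuf_min"
}

set_rxbuf_min() {
    knob=$(knob_path "$1")
    if [ -w "$knob" ]; then
        printf '%s' "$2" > "$knob"
    else
        echo "warning: $knob not present or not writable" >&2
        return 1
    fi
}

# As in the thread: raise the minimum receive buffer on eth2 to 256
set_rxbuf_min eth2 256 || true
```

Note this setting is not persistent across reboots; re-apply it from an init script if it helps.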