
RE: [Xen-users] Lots of udp (multicast) packet loss in domU


  • To: "Mike Kazmier" <DaKaZ@xxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Wed, 14 Jan 2009 13:11:28 +1100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 13 Jan 2009 18:12:49 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: Acl16t6wVCBxBzpVQn+OVO99lxww0gAAaIXg
  • Thread-topic: [Xen-users] Lots of udp (multicast) packet loss in domU

> Thanks for the reply James, there are some comments below, but let me
> start by stating that this problem is NOT solved.  When we removed the
> CPU cap we only moved on to the NEXT problem.  Here is the issue now:
> we are still getting massive packet loss (now upwards of 60%) and it
> appears the bridge is the culprit.  Again, we have approximately 600
> Mbps of multicast (UDP) traffic we are trying to pass TO and then back
> FROM a domU, and other domUs occasionally grab and use this traffic.
> Each domU that starts seems to consume >80% CPU of the dom0 - but only
> if it is passed the bridged ethernet ports.  So maybe our architecture
> just isn't supported?  Should we be using a routed configuration (with
> XORP as a multicast router?) and/or just use PCI passthrough?  We don't
> see any such issues with PCI passthrough, but then our domUs have to be
> connected via an external switch, which is something we were hoping to
> avoid.  Any advice here would be great.

I'm not sure which version the multicast handling was introduced in
(maybe 3.2.1), but before that, multicast traffic was treated as
broadcast traffic and so echoed to every domain.

These days, each domU network interface should make dom0 aware of
which multicast traffic it should be receiving, so unless your domU
kernels are old that shouldn't be your problem. Maybe someone else can
confirm that multicast filtering is definitely in place?
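One way to check this yourself (a diagnostic sketch, not something from this thread; device names like eth0 and vif1.0 are assumptions and will differ on your system):

```shell
# Inside the domU: list the multicast groups the interface has joined.
# These IGMP membership reports are what dom0's bridge can learn from.
ip maddr show dev eth0
cat /proc/net/igmp

# In dom0: confirm which bridge the domU's vif is attached to, then
# watch the vif's traffic counters to see whether it is receiving ALL
# of the multicast stream (broadcast-style flooding) or only the
# groups the domU actually joined.
brctl show
ip -s link show dev vif1.0
```

If the vif's RX counters grow at the full 600 Mbps even when the domU has joined no groups, the bridge is flooding multicast to every port, which would explain the per-domU CPU cost you are seeing.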

Bridging should be much lighter on CPU usage than routing, so I don't
think that going down that path would help you in any way.

What size are your packets? If they are VoIP then you are probably stuck
with tiny packets, but if they carry something that can use big packets
then you may be able to improve things by increasing your MTU to 9000,
although that is a road less travelled and may not be supported.
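To put rough numbers on why packet size matters here (a back-of-the-envelope sketch, ignoring protocol overhead): the per-packet rate is what drives the bridging CPU cost, and jumbo frames cut it by a factor of six at the same throughput.

```python
# Approximate packets-per-second at a given throughput and packet size.
# Ignores Ethernet/IP/UDP overhead, so treat these as ballpark figures.
def packets_per_second(throughput_mbps: float, packet_bytes: int) -> float:
    return throughput_mbps * 1_000_000 / (packet_bytes * 8)

print(packets_per_second(600, 1500))  # standard MTU: 50000.0 pkt/s
print(packets_per_second(600, 9000))  # 9000-byte MTU: ~8333 pkt/s
```

At ~50,000 packets per second, every per-packet cost in the bridge and netback path is multiplied accordingly, which is consistent with the high dom0 CPU usage described above.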

But maybe Xen isn't the right solution to this problem?

James

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

