
RE: [Xen-users] Lots of udp (multicast) packet loss in domU


  • To: James Harper <james.harper@xxxxxxxxxxxxxxxx>
  • From: Mike Kazmier <DaKaZ@xxxxxxxxx>
  • Date: Wed, 14 Jan 2009 01:52:39 +0000
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 13 Jan 2009 17:53:44 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Thanks for the reply, James. There are some comments below, but let me start by 
stating that this problem is indeed NOT solved.  Removing the CPU cap only moved 
us on to the NEXT problem.  Here is the issue now: we are still getting 
massive packet loss (now upwards of 60%), and it appears the bridge is the culprit.  
Again, we have approximately 600 Mbps of multicast (UDP) traffic we are trying 
to pass TO and then back FROM a domU, and other domU's occasionally grab 
and use this traffic.  Each domU that starts seems to consume >80% of a 
dom0 CPU - but only if it is attached to the bridged ethernet ports.  So, maybe our 
architecture just isn't supported?  Should we be using a routed configuration 
(with xorp as a multicast router?) and/or just use PCI passthrough?  We don't 
see any such issues with PCI passthrough, but then our domU's have to be 
connected via an external switch, which is something we were hoping to 
avoid.  Any advice here would be great.
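For anyone trying to reproduce the numbers: the drops show up in the standard
kernel counters.  A minimal sketch of what to watch from dom0 (peth0 and the
vif names are just examples from a setup like ours):

```shell
# Kernel-wide UDP stats: watch "packet receive errors" / "RcvbufErrors"
# climb while the multicast stream is running
netstat -su

# Per-interface RX counters in dom0; column 5 after the name is RX drops
# (peth0 / vif1.0 are example interface names)
grep -E 'peth0|vif' /proc/net/dev

# Per-domain CPU burn while the stream runs ('xm top' wraps xentop)
xm top
```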

On Mon, Jan 12, 2009 at 5:51 PM "James Harper" <james.harper@xxxxxxxxxxxxxxxx> 
wrote:
> > Hello,
> > 
> > After a few of us have spent a week googling around for answers, I
> > feel compelled to ask this question: how do I stop packet loss between
> > my dom0 and domU?  We are currently running xen-3.3.0 (and have tried
> > xen-3.2.1) on a Gentoo system with a 2.6.18 kernel for both domU and
> > dom0.  We have also tried a 2.6.25 kernel for dom0, with exactly the
> > same results.
> > 
> > The goal is to run our multicast processing application in a domU with
> > the BRIDGED configuration.  Note: we do not have any problem if we put
> > the network interfaces into PCI passthrough and use them exclusively
> > in the domU, but that is less than ideal, as occasionally other domU's
> > need to communicate with those feeds.
> > 
> Googling has probably already led you to these tips, but just in case:
> 
> Try 'echo 0 > bridge-nf-call-iptables' if you haven't already. This will
> stop bridged traffic traversing any of your iptables firewall rules. If
> you are using ipv6, then also 'echo 0 > bridge-nf-call-ip6tables'.

Tried this - no effect - we have no rules in place.
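For the record, the full proc paths for those knobs (they only exist when the
kernel has bridge netfilter, CONFIG_BRIDGE_NETFILTER, compiled in) are:

```shell
# Stop bridged frames from being passed through the iptables, ip6tables
# and arptables hooks
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 0 > /proc/sys/net/bridge/bridge-nf-call-arptables
```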

> Another thing to try is turning off checksum offloading. I don't think
> it is likely to make much difference, but given the little effort
> required it's probably worthwhile. (ethtool -k to see which settings are
> on, ethtool -K to modify them.)

Again, no difference here.
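For completeness, what we ran looked like this (eth0 stands in for our bridged
NIC):

```shell
# Show which offloads are currently enabled on the NIC
ethtool -k eth0

# Disable rx/tx checksum offload; TSO and scatter-gather are also worth
# toggling while testing
ethtool -K eth0 rx off tx off
ethtool -K eth0 tso off sg off
```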

> Also try pinning Dom0 and DomU to separate physical CPUs. Again, I don't
> think this is likely to make much difference, but it's easy to test.

Did this, also pinned domU to unused CPUs.  Again, no effect.
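In case the exact commands matter, the pinning was along these lines (the
domain name and CPU numbers are examples for a 4-core box):

```shell
# Keep dom0's two vcpus on physical cpus 0-1 and the busy domU on 2-3
xm vcpu-pin Domain-0 0 0
xm vcpu-pin Domain-0 1 1
xm vcpu-pin mcast-domu 0 2
xm vcpu-pin mcast-domu 1 3

# Confirm the placement took effect
xm vcpu-list
```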

--Mike


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
