
RE: [Xen-users] Lots of udp (multicast) packet loss in domU


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Mike Kazmier <DaKaZ@xxxxxxxxx>
  • Date: Wed, 14 Jan 2009 15:14:29 +0000
  • Delivery-date: Wed, 14 Jan 2009 07:15:09 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hello again James,

> These days, each DomU network interface should be making Dom0 aware of
> what multicast traffic it should be receiving, so unless your domU
> kernels are old that shouldn't be your problem, but maybe someone else
> can confirm that multicast is definitely in place?

Hmmm, how would I verify that?  As far as I can see, dom0 is in constant 
promiscuous mode so that it can pass all bridged traffic.  This doesn't really 
matter, though; I actually do need all the traffic I am receiving.  The problem 
is that the load between dom0 and domU is exorbitant.  With 600 Mbps of network 
I/O, dom0 consumes an entire 5310 core (2.33 GHz Penryn), whereas if I pin that 
interface into the domU via PCI passthrough, we see only about a 5% CPU load to 
ingest the same traffic.  I don't know whether it's important, but in dom0 
"top" shows the CPU as 99% idle, while "xm top" is where I see the 100% 
utilization on dom0.
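
(In case it is useful to anyone checking the same thing, here is a rough Python 
sketch of how I could inspect this from dom0.  The interface names are just 
examples from my setup; note that the sysfs flag only reflects administratively 
set promiscuous mode, not the promiscuity the kernel enables internally when it 
enslaves a device to a bridge, so the IGMP membership list is probably the more 
telling half.)

#!/usr/bin/env python
# Rough sketch: report the promiscuous flag and IGMP multicast group
# memberships for a few dom0 interfaces.  Interface names are examples only.
import socket
import struct

IFF_PROMISC = 0x100  # from <linux/if.h>

def is_promiscuous(iface):
    # /sys/class/net/<iface>/flags holds the interface flag word in hex.
    # This only shows the administratively set flag, not bridge-internal
    # promiscuity.
    with open("/sys/class/net/%s/flags" % iface) as f:
        return bool(int(f.read().strip(), 16) & IFF_PROMISC)

def igmp_memberships(iface):
    # /proc/net/igmp lists IGMP group memberships per device; group lines
    # are indented and carry the address as host-order hex (assumes x86,
    # i.e. little-endian).
    groups, current = [], None
    with open("/proc/net/igmp") as f:
        next(f)  # skip header line
        for line in f:
            if not line.startswith("\t"):
                current = line.split()[1].rstrip(":")
            elif current == iface:
                raw = int(line.split()[0], 16)
                groups.append(socket.inet_ntoa(struct.pack("<I", raw)))
    return groups

if __name__ == "__main__":
    for dev in ("peth0", "xenbr0", "vif1.0"):  # example interface names
        try:
            print("%s: promiscuous=%s groups=%s"
                  % (dev, is_promiscuous(dev), igmp_memberships(dev)))
        except IOError:
            print("%s: not present" % dev)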

> I think that bridging is definitely much lighter on CPU usage than
> routing, so I don't think that going down that path would help you in
> any way.

I agree in principle; I just didn't know what the Xen internals looked like, so 
I thought I would ask.

> What size are your packets? If they are VoIP then you are probably stuck
> with tiny packets, but if they are something else that can use big
> packets then you may be able to improve things by increasing your MTU up
> to 9000, although that is a road less travelled and may not be
> supported.

These are video packets; each one carries a 1316-byte UDP payload.  Changing 
the MTU upstream is not possible for me.
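
(And in case it helps anyone reproduce the comparison: below is a quick Python 
sketch of the kind of receiver I could run in the domU to count packets and 
estimate throughput for one of these streams.  The multicast group and port are 
placeholders, not my real ones.)

#!/usr/bin/env python
# Rough sketch: join a multicast group and report packets/s and an estimated
# Mbps, assuming the 1316-byte payloads described above.
import socket
import struct
import time

GROUP = "239.1.1.1"   # placeholder: substitute the real stream group
PORT = 5000           # placeholder: substitute the real stream port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group, letting the kernel pick the interface (INADDR_ANY).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

count, last = 0, time.time()
while True:
    sock.recv(2048)
    count += 1
    now = time.time()
    if now - last >= 1.0:
        # 1316-byte payloads -> bits/s = packets * 1316 * 8
        print("%d pkts/s, ~%.1f Mbps" % (count, count * 1316 * 8 / 1e6))
        count, last = 0, now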
 
> But maybe Xen isn't the right solution to this problem?

No, I still think it is; we are having great success with Xen in our 
application, except for this traffic forwarding.  Until we find the answer, 
we'll just have to use PCI passthrough and dedicate some NICs to the domU that 
needs the high bandwidth.
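
(For completeness, the passthrough side is just the usual xm config entry.  The 
PCI address below is an example; the NIC also has to be hidden from dom0 via 
pciback first, e.g. pciback.hide=(0000:03:00.0) on the dom0 kernel command 
line.)

# Example fragment of the domU config file (xm config files use Python syntax);
# substitute the NIC's actual bus:device.function as reported by lspci.
pci = [ '0000:03:00.0' ]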

Thanks again, I look forward to any more insights or ideas from the community.

--Mike


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

