
RE: [Xen-devel] I/O descriptor ring size bottleneck?


> I'm doing some networking experiments over high BDP topologies. Right
> now the configuration is quite simple -- two Xen boxes connected via a
> dummynet router. The dummynet router is set to limit bandwidth to
> 500Mbps and simulate an RTT of 80ms.

> Now if I run 50 netperf flows lasting 80 seconds (1000 RTTs) from
> inside a VM on one box talking to the netserver on the VM on the
> other box, I get a per-flow throughput of around 2.5Mbps (which
> sucks, but let's ignore the absolute value for the moment).
> If I run the same test, but this time from inside dom0, I get a
> per-flow throughput of around 6Mbps.
> I'm trying to understand the difference in performance. It seems to me
> that the I/O descriptor ring sizes are hard coded to 256 -- could that
> be a bottleneck here? If not, have people experience similar problems?
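
For what it's worth, a back-of-the-envelope comparison of the link's
bandwidth-delay product against the ring's capacity (a sketch assuming
256 slots and 1500-byte MTU frames, and ignoring header overheads and
how netfront/netback actually pace the ring) looks like this:

```python
# Rough bandwidth-delay product (BDP) vs. I/O descriptor ring capacity.
# Assumptions: 256 ring slots, 1500-byte MTU frames.

LINK_BPS = 500_000_000   # 500 Mb/s dummynet limit
RTT_S = 0.080            # 80 ms simulated RTT
MTU = 1500               # bytes per frame (assumed)
RING_SLOTS = 256         # hard-coded descriptor ring size

bdp_bytes = int(LINK_BPS / 8 * RTT_S)   # bytes needed in flight to fill the pipe
frames_in_flight = bdp_bytes // MTU     # MTU-sized frames to fill the pipe
ring_bytes = RING_SLOTS * MTU           # data one full ring can describe

print(f"BDP: {bdp_bytes} bytes (~{frames_in_flight} frames)")
print(f"Ring capacity: {ring_bytes} bytes")
# One full ring describes far less than one BDP, but because slots are
# recycled as soon as the backend consumes them, the ring bounds the
# batch per notification, not the amount of data in flight on the wire.
```

So the ring is an order of magnitude smaller than the pipe, but that
only matters if slots aren't being recycled fast enough.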

Interesting. I'm not aware of any high-BDP testing, and I'm slightly
surprised that it's causing a problem (low-latency situations are more of
a challenge for virtual networking).

The ring size really shouldn't be an issue for this: since descriptors
are recycled as they're consumed, it just has the effect of reducing the
number of context switches between dom0 and the guest.

BTW, I'd actually be very suspicious of dummynet's ability to operate at
500Mb/s. It's possible that the reduced bandwidth is due to some bad
interaction between the burstiness caused by Xen's context switching and
dummynet's traffic shaping.

Are your dom0 and domU running on the same processor? Could you try
using hyperthreading or SMP?

Have you checked that domU <-> domU performance is good on the LAN with
a single TCP connection?
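
Something like the following should tell you (hypothetical peer address;
these are netperf's standard options, with TCP_STREAM being the default
test anyway):

```shell
# On the receiving domU: start the netperf server.
netserver

# On the sending domU: one 60-second TCP stream to the peer.
# 'domU-peer' is a placeholder for the other guest's address.
netperf -H domU-peer -l 60 -t TCP_STREAM
```

If a single connection gets close to line rate on the LAN, the problem
is more likely in the interaction with the delayed/shaped path than in
the virtual network plumbing itself.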

