
Re: [Xen-devel] I/O descriptor ring size bottleneck?

Diwaker Gupta wrote:

> Hi everyone,
>
> I'm doing some networking experiments over high-BDP topologies. Right
> now the configuration is quite simple -- two Xen boxes connected via a
> dummynet router. The dummynet router is set to limit bandwidth to
> 500Mbps and simulate an RTT of 80ms.
>
> I'm using the following sysctl values:
> net.ipv4.tcp_rmem = 4096        87380   4194304
> net.ipv4.tcp_wmem = 4096        65536   4194304

If you're trying to tune TCP traffic, you might also want to increase
the default TCP socket size (87380) above, since raising only the
net.core limits won't help there.
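To put numbers on that (my arithmetic, not from the thread): a quick sketch of the bandwidth-delay product for this path against the quoted sysctl limits, assuming the 500Mbps/80ms figures above:

```python
# Bandwidth-delay product vs. the quoted tcp_rmem settings.
# Figures taken from the post above; the comparison is my own.

link_bps = 500_000_000   # 500 Mbps, as set on the dummynet router
rtt_s = 0.080            # 80 ms simulated RTT

bdp_bytes = int(link_bps / 8 * rtt_s)
print(f"BDP: {bdp_bytes} bytes (~{bdp_bytes / 2**20:.1f} MiB)")

# Compare against the quoted sysctl values.
tcp_rmem_default = 87380   # per-socket default receive buffer
tcp_rmem_max = 4194304     # per-socket maximum (used by autotuning)

print(f"default rmem covers {tcp_rmem_default / bdp_bytes:.1%} of the BDP")
print(f"max rmem covers {tcp_rmem_max / bdp_bytes:.1%} of the BDP")
```

Even the 4MB maximum is below the roughly 5MB BDP of a 500Mbps/80ms path, so a single flow can never fill the pipe with these settings; the 87KB default is off by more than an order of magnitude.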

> Now if I run 50 netperf flows lasting 80 seconds (1000 RTTs) from
> inside a VM on one box talking to the netserver on the VM on the
> other box, I get a per-flow throughput of around 2.5Mbps (which
> sucks, but let's ignore the absolute value for the moment).
>
> If I run the same test, but this time from inside dom0, I get a
> per-flow throughput of around 6Mbps.
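For what it's worth, neither case comes close to saturating the link; a quick aggregate check (my arithmetic, not the poster's):

```python
# Aggregate throughput of the 50 flows vs. the 500 Mbps link,
# using the per-flow numbers reported above.
flows = 50
per_flow_domU_mbps = 2.5   # reported from inside the VM
per_flow_dom0_mbps = 6.0   # reported from inside dom0
link_mbps = 500

agg_domU = flows * per_flow_domU_mbps   # aggregate for the domU case
agg_dom0 = flows * per_flow_dom0_mbps   # aggregate for the dom0 case

print(f"domU aggregate: {agg_domU} Mbps, dom0 aggregate: {agg_dom0} Mbps")
print(f"link use: domU {agg_domU/link_mbps:.0%}, dom0 {agg_dom0/link_mbps:.0%}")
```

At roughly 25% and 60% link utilisation respectively, the dummynet bottleneck itself isn't the limit in either case, which points at something on the sending/receiving hosts.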

Could you provide any further information on your test setup and data?
Which netperf test were you running, by the way?

> I'm trying to understand the difference in performance. It seems to me
> that the I/O descriptor ring sizes are hard-coded to 256 -- could that
> be a bottleneck here? If not, have people experienced similar problems?

Someone on this list had posted that they would be getting
oprofile working soon - you might want to retry your testing
with that patch.
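On the ring-size question, a rough capacity estimate may help frame it (my sketch; the per-slot payload size is an assumption, not taken from the Xen sources in this thread):

```python
# Rough estimate: how much data fits in one pass of a 256-slot
# I/O descriptor ring, compared with the path's BDP.
ring_slots = 256
bytes_per_slot = 1500          # assume one MTU-sized frame per slot
ring_bytes = ring_slots * bytes_per_slot

link_bps = 500_000_000         # 500 Mbps link
rtt_s = 0.080                  # 80 ms RTT
bdp_bytes = int(link_bps / 8 * rtt_s)

print(f"one ring's worth of frames: {ring_bytes} bytes")
print(f"ratio to path BDP: {ring_bytes / bdp_bytes:.1%}")
```

One ring's worth of MTU-sized frames is only a small fraction of the BDP, but the ring is replenished continuously rather than once per RTT, so this isn't a hard window; it does mean the ring must be turned over many times per RTT, making domU-dom0 notification overhead a plausible suspect.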


Xen-devel mailing list


