Re: [Xen-devel] TSQ accounting skb->truesize degrades throughput for large packets



On 09/07/2013 12:56 AM, Eric Dumazet wrote:
> On Fri, 2013-09-06 at 17:36 +0100, Zoltan Kiss wrote:
>> On 06/09/13 13:57, Eric Dumazet wrote:
>>> Well, I have no problem getting line rate on 20Gb with a single flow,
>>> so other drivers have no problem.
>> I've run some tests on bare metal:
>> Dell PE R815, Intel 82599EB 10Gb, 3.11-rc4 32-bit kernel with 3.17.3
>> ixgbe (TSO, GSO on), iperf 2.0.5.
>> Transmitting packets toward the remote end (so running iperf -c on this
>> host) achieves 8.3 Gbps with the default 128k tcp_limit_output_bytes.
>> When I increased this to 131,506 (128k + 434 bytes), throughput suddenly
>> jumped to 9.4 Gbps. Iperf CPU usage also rose a few percent, from ~36%
>> to ~40% (the softirq percentage in top also increased from ~3% to ~5%).
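
One possible reading of those numbers, offered only as a sketch (the
per-skb truesize of ~65,753 bytes below is an assumption, not something
measured in this thread): 131,506 is exactly 2 x 65,753, so the extra
434 bytes may be just enough budget for a second 64KB TSO skb to sit in
flight under TSQ.

#include <stdio.h>

int main(void)
{
    long truesize = 65753;              /* assumed truesize of one 64KB TSO skb */
    long limits[] = { 131072, 131506 }; /* default 128k, and 128k + 434 */

    for (int i = 0; i < 2; i++)
        printf("limit %ld allows %ld TSO skb(s) in flight\n",
               limits[i], limits[i] / truesize);
    return 0;
}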
> That is the typical tradeoff between latency and throughput.
>
> If you favor throughput, you can increase tcp_limit_output_bytes.
>
> The default is quite reasonable IMHO.
>
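
For reference, the knob can be raised with
sysctl -w net.ipv4.tcp_limit_output_bytes=262144, or from a program with
something like the sketch below (262144 is just an example value, and
writing it needs root):

#include <stdio.h>

int main(void)
{
    const char *path = "/proc/sys/net/ipv4/tcp_limit_output_bytes";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return 1;
    }
    fprintf(f, "%d\n", 262144);  /* example: raise the limit to 256k */
    fclose(f);
    return 0;
}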
>> So I guess it would be good to revisit the default value of this
>> setting. What hardware did you use, Eric, for your 20Gb results?
> Mellanox CX-3
>
> Make sure your NIC doesn't hold TX packets in the TX ring too long
> before signaling an interrupt for TX completion.
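
The coalescing parameters can be checked with ethtool -c. As an
illustration, a minimal C sketch of the same query through the ethtool
ioctl ("eth0" is a placeholder interface name); large tx-usecs/tx-frames
values delay TX-completion interrupts, which keeps each skb's truesize
charged to the socket longer under TSQ:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_coalesce ec = { .cmd = ETHTOOL_GCOALESCE };
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder NIC name */
    ifr.ifr_data = (char *)&ec;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("SIOCETHTOOL");
        return 1;
    }
    printf("tx-usecs: %u, tx-frames: %u\n",
           ec.tx_coalesce_usecs, ec.tx_max_coalesced_frames);
    close(fd);
    return 0;
}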

Virtio-net orphans the skb in .ndo_start_xmit(), so TSQ cannot
accurately throttle packets in the device, and it also can't do BQL.
Does this mean TSQ should be disabled for virtio-net?
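
To illustrate why orphaning defeats TSQ: TSQ charges skb->truesize
against a per-socket budget when a packet is queued and credits it back
from the skb destructor, which normally runs at TX completion;
skb_orphan() runs that destructor already in .ndo_start_xmit(), so the
budget never fills up. A toy userspace model (not kernel code, with the
same assumed truesize as above):

#include <stdio.h>
#include <stdbool.h>

#define LIMIT 131072            /* tcp_limit_output_bytes default */

static long inflight;           /* models the socket's in-flight byte count */

static bool can_queue(long truesize)   { return inflight + truesize <= LIMIT; }
static void queue_skb(long truesize)   { inflight += truesize; }
static void tx_complete(long truesize) { inflight -= truesize; }

int main(void)
{
    long truesize = 65753;      /* assumed truesize of one 64KB TSO skb */

    /* Normal driver: the destructor runs at TX completion, so a
     * second skb is throttled while the first is still in flight. */
    queue_skb(truesize);
    printf("normal: in flight %ld, next skb allowed: %d\n",
           inflight, can_queue(truesize));
    tx_complete(truesize);

    /* Orphaning driver: the destructor runs inside ndo_start_xmit(),
     * before the packet has really left, so TSQ never throttles. */
    queue_skb(truesize);
    tx_complete(truesize);      /* orphaned: credit returned immediately */
    printf("orphan: in flight %ld, next skb allowed: %d\n",
           inflight, can_queue(truesize));
    return 0;
}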

