
Re: [Xen-devel] Bidirectional network throughput for netback



On Tue, Jul 30, 2013 at 9:39 AM, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> On Tue, Jul 30, 2013 at 02:39:50AM -0400, Shakeel Butt wrote:
>> Hi,
>>
>> Is there any limitation on the bidirectional throughput of the
>> netback driver? Measuring the inbound and outbound throughput of a
>> DomU in parallel affects both flows. Let me explain my experiment
>> setup for clarity.
>>
>> In my experiments I have three machines, A, B and C, each running
>> Xen 4.3 and Linux 3.6.11 and configured with 2 vCPUs and 2 GB RAM.
>> I am running a DomU on machine 'B' with the same configuration
>> (Linux 3.6.11, 2 vCPUs, 2 GB RAM). I measure the network throughput
>> between the DomU and machines 'A' and 'C' (or Dom0 on those
>> machines). Traffic is generated from A to the DomU and from the
>> DomU to C. If I generate only one flow at a time, I get similar
>> throughput for both flows, let's say X Mbps, but if I generate the
>> two flows in parallel, X is divided between them. If I instead
>> direct the traffic at Dom0 of 'B' (so netback is not used on B), I
>> get X throughput for both flows even when they run in parallel. So
>> it seems to me that netback is the bottleneck for the DomU. Is this
>> right? If so, which design choice in netback causes this?
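>>
>> (For illustration only; hostnames are placeholders.) With a tool
>> like iperf, the two parallel flows would be started roughly as:
>>
>>   # flow 1: machine A -> DomU
>>   domU$ iperf -s              # receiver on the DomU
>>   A$    iperf -c domU -t 60   # sender on machine A
>>
>>   # flow 2: DomU -> machine C
>>   C$    iperf -s              # receiver on machine C
>>   domU$ iperf -c C -t 60      # sender on the DomU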
>>
>
> Currently there is only one kthread in netback serving a DomU's
> transmit and receive queues, so it is normal to see the throughput
> drop to half.

Thanks. Is there one kthread for each netback/netfront pair, or one
kthread per domain?
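
For reference, my reading of the kthread main loop in the 3.6-era
drivers/net/xen-netback/netback.c (a simplified sketch of the driver
source, not a verbatim copy) is that a single thread alternates
between the two directions:

    /* Simplified sketch of the netback kthread main loop (3.x era).
     * One kthread handles both guest-receive (backend -> DomU) and
     * guest-transmit (DomU -> backend) work, so the two directions
     * share the same thread's processing time.
     */
    static int xen_netbk_kthread(void *data)
    {
            struct xen_netbk *netbk = data;

            while (!kthread_should_stop()) {
                    /* sleep until either direction has work queued */
                    wait_event_interruptible(netbk->wq,
                                             rx_work_todo(netbk) ||
                                             tx_work_todo(netbk) ||
                                             kthread_should_stop());
                    cond_resched();

                    if (kthread_should_stop())
                            break;

                    /* packets destined for the guest */
                    if (rx_work_todo(netbk))
                            xen_netbk_rx_action(netbk);

                    /* packets coming from the guest */
                    if (tx_work_todo(netbk))
                            xen_netbk_tx_action(netbk);
            }

            return 0;
    }

If that reading is right, both directions of my two flows end up
serialized behind this one loop, which would explain the halving.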

Shakeel

>
>
> Wei.
>
>> thanks,
>> Shakeel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

