
Re: [Xen-devel] Poor network performance between DomU with multiqueue support



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Wei Liu
> Sent: Tuesday, December 02, 2014 7:02 PM
> To: zhangleiqiang
> Cc: wei.liu2@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Poor network performance between DomU with
> multiqueue support
> 
> On Tue, Dec 02, 2014 at 04:30:49PM +0800, zhangleiqiang wrote:
> > Hi, all
> >     I am testing the performance of the xen netfront/netback driver with
> > multi-queue support. The throughput from a domU to a remote dom0 is
> > 9.2Gb/s, but the throughput from a domU to a remote domU is only 3.6Gb/s,
> > so I thought the bottleneck was the path from dom0 to the local domU.
> > However, we have done some testing and found that the throughput from
> > dom0 to the local domU is 5.8Gb/s.
> >     And if we send packets from one DomU to three other DomUs on different
> > hosts simultaneously, the sum of the throughput can reach 9Gbps. It seems
> > like the bottleneck is the receiver?
> >     After some analysis, I found that even though max_queue for
> > netfront/netback is set to 4, there are some strange results as follows:
> >     1. In domU, only one rx queue deals with softirqs
> 
> Try to bind irq to different vcpus?

Do you mean we should try to bind the IRQs to different vcpus in the DomU? I will try it now.
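
For reference, a minimal sketch of what I plan to try inside the DomU (Python;
it assumes the per-queue rx interrupts show up in /proc/interrupts with names
like "eth0-q0-rx" -- the exact names may differ on your kernel):

    #!/usr/bin/env python
    # Sketch: spread eth0's per-queue rx interrupts across vcpus by writing
    # CPU masks to /proc/irq/<n>/smp_affinity. The "eth0-qN-rx" name pattern
    # is an assumption; adjust it to whatever /proc/interrupts really shows.
    import re

    PATTERN = re.compile(r'^\s*(\d+):.*\beth0-q(\d+)-rx\b')

    with open('/proc/interrupts') as f:
        for line in f:
            m = PATTERN.match(line)
            if not m:
                continue
            irq, queue = int(m.group(1)), int(m.group(2))
            mask = 1 << queue  # pin queue N to vcpu N (4 queues, 4 vcpus)
            with open('/proc/irq/%d/smp_affinity' % irq, 'w') as aff:
                aff.write('%x\n' % mask)
            print('irq %d (queue %d) -> vcpu %d' % (irq, queue, queue))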

> 
> >     2. In dom0, only two of the netback queue processes are scheduled;
> > the other two aren't scheduled.
> 
> How many Dom0 vcpus do you have? If it only has two then there will only be
> two processes running at a time.

Dom0 has 6 vcpus and 6G memory. There is only one DomU running in Dom0, so
four netback processes are running in Dom0 (because the max_queue param of
the netback kernel module is set to 4).
The phenomenon is that only 2 of these four netback processes were busy,
running at about 70% CPU usage, while the other two used very little CPU.
Is there a hash algorithm that determines which netback process handles an
incoming packet?

> >
> >     Are there any issues with my test? In theory, can we achieve 9~10Gb/s
> > between DomUs on different hosts using netfront/netback?
> >
> >      The testing environment details are as follows:
> >    1. Hardware
> >        a. CPU: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz, 2 CPUs with 6 cores
> > each, Hyper-Threading enabled
> >        b. NIC: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network
> > Connection (rev 01)
> >    2. Software:
> >        a. HostOS: SLES 12 (Kernel 3.16-7, git commit
> > d0335e4feea0d3f7a8af3116c5dc166239da7521)
> 
> And this is a SuSE kernel?

No, I just compiled the Dom0 and DomU kernels from the 3.16-7 tag on kernel.org.

> >        b. NIC Driver: IXGBE 3.21.2
> >        c. OVS: 2.1.3
> >        d. MTU: 1600
> >        e. Dom0: 6U6G
> >        f. queue number: 4
> >        g. xen 4.4
> >        h. DomU: 4U4G
> >    3. Networking Environment:
> >        a. All network flows are transmitted/received through OVS
> >        b. The sender and receiver servers are connected directly via their
> > 10GE NICs
> >    4. Testing Tools:
> >        a. Sender: netperf
> >        b. Receiver: netserver
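
(For completeness, a rough sketch of how the parallel multi-receiver run
mentioned above could be driven from the sending DomU -- the receiver IPs are
placeholders, and netserver is assumed to be already running on each target:)

    #!/usr/bin/env python
    # Launch one netperf TCP_STREAM test per receiver in parallel, then
    # collect the results, as in the "one DomU sending to 3 DomUs" test.
    import subprocess

    RECEIVERS = ['192.168.1.21', '192.168.1.22', '192.168.1.23']  # placeholders
    DURATION = 60  # seconds per stream

    procs = [subprocess.Popen(['netperf', '-H', host, '-l', str(DURATION),
                               '-t', 'TCP_STREAM'], stdout=subprocess.PIPE)
             for host in RECEIVERS]
    for host, p in zip(RECEIVERS, procs):
        out, _ = p.communicate()
        print('=== %s ===' % host)
        print(out.decode())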
> >
> >
> > ----------
> > zhangleiqiang (Trump)
> > Best Regards
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

