
Re: [Xen-devel] Re: network performance drop heavily in xen 4.0 release


  • To: yingbin wang <yingbin.wangyb@xxxxxxxxx>
  • From: "Ronaldo C. A. Chaves" <xarqui@xxxxxxxxx>
  • Date: Sat, 24 Apr 2010 11:20:31 -0300
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sat, 24 Apr 2010 07:21:33 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

I compared the files, and the differences are:

CONFIG_BLK_DEV_LOOP=m
CONFIG_ATA_PIIX=m
CONFIG_XEN_DEV_EVTCHN=y

in config-2.6.31.13-high performance.
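For reference, a minimal sketch of how such a comparison can be done from a
shell; the first file name is an assumption (the thread only names the
"high performance" attachment), so adjust both paths to the actual files:

  # Show only the CONFIG_* lines that differ between the two kernel configs
  diff <(sort config-2.6.31.13-debug) <(sort "config-2.6.31.13-high performance")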


2010/4/24 yingbin wang <yingbin.wangyb@xxxxxxxxx>:
Of course.
The attachment is the dom0 kernel compile config that fixes the problem.
I don't know the exact config option that causes the problem, so I
didn't test 2.6.18.8 with debug. You can compare it with the previous
config to find the differences.

Here are my test results:

test tool:  iperf-2.0.4
command:
root@xxxxxxxxxxx :      iperf -s
root@xxxxxxxxxxx :      iperf -c 10.250.6.25 -i 1 -t 100

network performance:

xen4.0+kernel2.6.31.13(with debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 9.5 sec    249 MBytes    219 Mbits/sec

xen4.0+kernel2.6.31.13(without debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-100.0 sec  10.7 GBytes    920 Mbits/sec

xen3.4.2+kernel2.6.18.8(without debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-15.0 sec  1.64 GBytes    941 Mbits/sec

xen4.0+kernel2.6.18.8(without debug):
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-16.3 sec  1.79 GBytes    941 Mbits/sec

Cheers,
wyb
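
Since the fix was turning off debug options in the dom0 kernel config (see
the quoted thread below), here is a minimal sketch of how those options can
be listed and disabled before rebuilding. It assumes the kernel source tree
ships scripts/config; the option names are illustrative examples, not the
confirmed culprits from this thread:

  # List debug-related options currently enabled in the dom0 .config
  grep -E '^CONFIG_(DEBUG|LOCKDEP|PROVE_LOCKING|TRACE_IRQFLAGS)' .config

  # Disable selected options (example names) and refresh dependent symbols
  scripts/config --disable DEBUG_LOCK_ALLOC --disable PROVE_LOCKING
  make oldconfig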

2010/4/19 Pasi Kärkkäinen <pasik@xxxxxx>:
> On Fri, Apr 16, 2010 at 04:49:42PM +0800, yingbin wang wrote:
>> The problem is solved.
>>
>> We turned off most of the debug config options and, like a miracle, the
>> performance returned to its previous level.
>> We compared the .config of 2.6.18.8 with that of 2.6.31.13; the
>> differences are the debug options.
>> I think the default .config for 2.6.31.13 should disable the debug
>> options, or at least provide a way to turn them off.
>>
>
> Could you please post the exact .config options you turned off to fix the problem?
> I can add that info to the wiki page.
>
> Also, could you please post the performance numbers for 2.6.18.8 and for
> the pvops dom0, with and without debug? That would be interesting to know.
>
> Thanks!
>
> -- Pasi
>
>> thanks all
>>
>> Cheers,
>> wyb
>>
>> 2010/4/16 yingbin wang <yingbin.wangyb@xxxxxxxxx>:
>> > Hi,
>> >     I am reporting a bug. We have just upgraded to
>> > xen4.0+kernel2.6.31.13 recently; however, we found that network
>> > performance drops heavily in dom0 (reduced by roughly three quarters,
>> > from 941 Mbit/s to 219 Mbit/s, vs xen3.4.2+kernel2.6.18.8).
>> >
>> > Our environment:
>> > Hardware:
>> >   Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
>> >   01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II
>> > BCM5709 Gigabit Ethernet (rev 20)
>> >   01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II
>> > BCM5709 Gigabit Ethernet (rev 20)
>> > Compile environment and filesystem:
>> >   Red Hat AS 5.4
>> >
>> > xm info :
>> > -----------------------------------------------------------------
>> > host                   : r02k08015
>> > release                : 2.6.31.13xen
>> > version                : #1 SMP Tue Apr 13 20:38:51 CST 2010
>> > machine                : x86_64
>> > nr_cpus                : 16
>> > nr_nodes               : 2
>> > cores_per_socket       : 4
>> > threads_per_core       : 2
>> > cpu_mhz                : 2266
>> > hw_caps                :
>> > bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
>> > virt_caps              : hvm
>> > total_memory           : 24539
>> > free_memory            : 15596
>> > node_to_cpu            : node0:0,2,4,6,8,10,12,14
>> >                         node1:1,3,5,7,9,11,13,15
>> > node_to_memory         : node0:3589
>> >                         node1:12007
>> > node_to_dma32_mem      : node0:2584
>> >                         node1:0
>> > max_node_id            : 1
>> > xen_major              : 4
>> > xen_minor              : 0
>> > xen_extra              : .0
>> > xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
>> > hvm-3.0-x86_32p hvm-3.0-x86_64
>> > xen_scheduler          : credit
>> > xen_pagesize           : 4096
>> > platform_params        : virt_start=0xffff800000000000
>> > xen_changeset          : unavailable
>> > xen_commandline        : dom0_mem=10240M
>> > cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
>> > cc_compile_by          : root
>> > cc_compile_domain      :
>> > cc_compile_date        : Tue Apr 13 23:04:16 CST 2010
>> > xend_config_format     : 4
>> > ---------------------------------------------------------------------------
>> >
>> >
>> > test tool:  iperf-2.0.4
>> > command:
>> > root@xxxxxxxxxxx :      iperf -s
>> > root@xxxxxxxxxxx :      iperf -c 10.250.6.25 -i 1 -t 100
>> >
>> > network performance:
>> >
>> > xen4.0+kernel2.6.31.13:
>> > [ ID] Interval       Transfer     Bandwidth
>> > [  4]  0.0- 9.5 sec    249 MBytes    219 Mbits/sec
>> >
>> > xen3.4.2+kernel2.6.18.8:
>> > [ ID] Interval       Transfer     Bandwidth
>> > [  4]  0.0-15.0 sec  1.64 GBytes    941 Mbits/sec
>> >
>> > BTW:
>> >   1. disk I/O performance also dropped, from 90 MB/s to 60 MB/s
>> >      (a quick check is sketched after this quoted thread);
>> >   2. the attachment is the dom0 kernel compile config.
>> >
>> > Cheers,
>> > wyb
>> >
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>
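
The original report above also mentions a disk I/O drop from 90 MB/s to
60 MB/s, without saying which tool produced those figures. A minimal way to
re-check sequential throughput (the tool choice and device name are
assumptions, not what the reporter used):

  # Sequential write of 1 GiB, bypassing the page cache
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct

  # Timed buffered reads from the block device
  hdparm -t /dev/sda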

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

