
[Xen-users] Re: network performance drop heavily in xen 4.0 release


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: yingbin wang <yingbin.wangyb@xxxxxxxxx>
  • Date: Fri, 16 Apr 2010 16:48:55 +0800
  • Delivery-date: Fri, 16 Apr 2010 01:50:05 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

The problem is solved.

We disabled most of the debug config options, and the performance
returned to its previous level.
We compared the .config of 2.6.18.8 with that of 2.6.31.13; the
differences are mainly the debug options.
I think the default .config for 2.6.31.13 should ship with the debug
options disabled, or at least provide a way to turn them off.
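
For anyone hitting the same drop, a sketch of how to find and turn off
such options (the option names below are common examples, not
necessarily the exact set that mattered here; scripts/config ships with
recent kernel trees, otherwise edit .config by hand):

  # show config differences that mention debug options
  diff linux-2.6.18.8/.config linux-2.6.31.13/.config | grep DEBUG

  # disable some of the usual performance-heavy debug options
  # (example names only; check against your own diff output)
  cd linux-2.6.31.13
  scripts/config --disable DEBUG_KERNEL \
                 --disable PROVE_LOCKING \
                 --disable DEBUG_SPINLOCK \
                 --disable DEBUG_MUTEXES \
                 --disable DEBUG_SLAB \
                 --disable DEBUG_PAGEALLOC
  make oldconfig   # resolve remaining dependencies, then rebuild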

Thanks, all.

Cheers,
wyb

2010/4/16 yingbin wang <yingbin.wangyb@xxxxxxxxx>:
> Hi:
>     I'd like to report a bug!  We have just upgraded to
> xen4.0+kernel2.6.31.13 recently; however, we found that network
> performance drops heavily in dom0 (reduced by nearly 2/3 vs
> xen3.4.2+kernel2.6.18.8).
>
> our env :
> hardware :
>   Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
>   01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II
> BCM5709 Gigabit Ethernet (rev 20)
>   01:00.1 Ethernet controller: Broadcom Corporation NetXtreme II
> BCM5709 Gigabit Ethernet (rev 20)
> compile env and filesystem :
>   Redhat AS 5.4
>
> xm info :
> -----------------------------------------------------------------
> host                   : r02k08015
> release                : 2.6.31.13xen
> version                : #1 SMP Tue Apr 13 20:38:51 CST 2010
> machine                : x86_64
> nr_cpus                : 16
> nr_nodes               : 2
> cores_per_socket       : 4
> threads_per_core       : 2
> cpu_mhz                : 2266
> hw_caps                :
> bfebfbff:28100800:00000000:00001b40:009ce3bd:00000000:00000001:00000000
> virt_caps              : hvm
> total_memory           : 24539
> free_memory            : 15596
> node_to_cpu            : node0:0,2,4,6,8,10,12,14
>                         node1:1,3,5,7,9,11,13,15
> node_to_memory         : node0:3589
>                         node1:12007
> node_to_dma32_mem      : node0:2584
>                         node1:0
> max_node_id            : 1
> xen_major              : 4
> xen_minor              : 0
> xen_extra              : .0
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          : unavailable
> xen_commandline        : dom0_mem=10240M
> cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)
> cc_compile_by          : root
> cc_compile_domain      :
> cc_compile_date        : Tue Apr 13 23:04:16 CST 2010
> xend_config_format     : 4
> ---------------------------------------------------------------------------
>
>
> test tool:  iperf-2.0.4
> commands:
> root@xxxxxxxxxxx (server):  iperf -s
> root@xxxxxxxxxxx (client):  iperf -c 10.250.6.25 -i 1 -t 100
>
> network performance:
>
> xen4.0+kernel2.6.31.13:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0- 9.5 sec    249 MBytes    219 Mbits/sec
>
> xen3.4.2+kernel2.6.18.8:
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-15.0 sec  1.64 GBytes    941 Mbits/sec
>
> BTW: 1. the disk I/O performance also dropped, from 90MB/s to 60MB/s;
>      2. the attachment is the dom0 kernel compile config.
>
> Cheers,
> wyb
>
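
(For reference on the disk figure quoted above: a sequential-write
number of that kind can be sanity-checked with a plain dd run. The path
and sizes below are only examples; oflag=direct bypasses the page cache
so the reported rate reflects the disk itself.)

  # write 1GB sequentially with direct I/O and read the reported rate
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
  rm -f /tmp/ddtest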

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

