Re: [Xen-users] Xen Performance
On Tue, Oct 13, 2009 at 5:06 AM, Fajar A. Nugraha <fajar@xxxxxxxxx> wrote:
CentOS 5.4? If my time machine worked :-) . I'm already running CentOS 5.3
with Gitco's Xen 3.4.1.

Here's another system: CentOS 5.3 Dom0 with CentOS 5.3 DomUs on a dual-core
Xeon system (2.8 GHz):

DomU to DomU - 1.93 Gbits/sec
DomU to Dom0 - 2.76 Gbits/sec
Dom0 to DomU - 193 Mbits/sec

A third system running a CentOS 5.3 Dom0, an Ubuntu 9.04 DomU with the Debian
Lenny xenified kernel, and a CentOS 5.3 DomU, on a Core2 Duo (2.2 GHz):

DomU to DomU - 2.89 Gbits/sec
DomU to Dom0 - 4.4 Gbits/sec
Dom0 to DomU - 257 Mbits/sec

None of these summaries are really that accurate, because if I do an
iperf -c 192.168.0.100 -r the return speed is always in the toilet.

Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.191 port 5001 connected with 192.168.0.196 port 57543
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  3.38 GBytes  2.89 Gbits/sec
------------------------------------------------------------
Client connecting to 192.168.0.196, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.191 port 38701 connected with 192.168.0.196 port 5001
write2 failed: Broken pipe
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 0.0 sec  15.6 KBytes  343 Mbits/sec

This is the behavior I observed almost two years ago, and it still seems to be
consistent. Fajar, could you run these on your systems to see if you're seeing
something different? The one thing that's always the same is that I'm using
CentOS 5.3 as the Dom0.

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
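For anyone wanting to repeat the measurement, a minimal sketch of the iperf
invocations behind the numbers above (assuming iperf 2.x is installed in both
guests; the address is the one used in the post):

# on the guest acting as the server (listens on TCP port 5001 by default):
iperf -s

# on the client; -r runs the transfer in each direction, one after the
# other, which is where the poor return-path number shows up:
iperf -c 192.168.0.100 -r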