
[Xen-users] Xen I/O and network performance is only 20-30% of the physical machine?



First, sorry for my poor English~
Here is the test:

Virtualization performance comparison test

Test environment
Physical machine:
CPU: 8 cores
Memory: 8 GB
HDD: 147 GB

Xen virtual machine:
CPU: 2 cores
Memory: 4 GB
Hard drive: 30 GB

VMware virtual machine:
CPU: 2 cores
Memory: 4 GB
Hard drive: 30 GB


Fibre Channel disk array (SAN):
Size: 7.7 TB
Speed: 6 Gb/s

Tests and results
I/O performance test
Test methods
Performance is tested with dd; the script is as follows:

#!/bin/bash
# /mnt (the directly mounted SAN disk array)
echo "/mnt"
echo "dd if=/dev/zero of=/mnt/test0.date bs=100M count=50"
dd if=/dev/zero of=/mnt/test0.date bs=100M count=50
rm -rf /mnt/test0.date
echo "dd if=/dev/zero of=/mnt/test1.date bs=10M count=500"
dd if=/dev/zero of=/mnt/test1.date bs=10M count=500
rm -rf /mnt/test1.date
echo "dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000"
dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000
rm -rf /mnt/test2.date

# / (local disk)
echo "/"
echo "dd if=/dev/zero of=/test0.date bs=100M count=50"
dd if=/dev/zero of=/test0.date bs=100M count=50
rm -rf /test0.date
echo "dd if=/dev/zero of=/test1.date bs=10M count=500"
dd if=/dev/zero of=/test1.date bs=10M count=500
rm -rf /test1.date
echo "dd if=/dev/zero of=/test2.date bs=1024 count=5000000"
dd if=/dev/zero of=/test2.date bs=1024 count=5000000
rm -rf /test2.date

Run the script on the physical machine and on each virtual machine, testing both the / directory (local disk) and the /mnt directory (the directly mounted SAN disk array).
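
The MB/s figures below can be read straight from dd's summary line. A minimal extraction sketch (not part of the original test; it assumes the script above is saved as dd.sh and that GNU coreutils dd is in use, which prints a summary such as "5242880000 bytes (5.2 GB) copied, 39.7209 s, 132 MB/s" on stderr):

# run the script and keep only the rate at the end of each dd summary line
sh dd.sh 2>&1 | grep copied | awk -F, '{ print $NF }'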

Test results

Directory  bs          count    records in  records out  time (s, Xen)  MB/s (Xen)  time (s, local)  MB/s (local)
/mnt       100M        50       50+0        50+0         39.7209        132         19.4492          270
/mnt       10M         500      500+0       500+0        44.5654        118         20.3288          258
/mnt       1024 bytes  5000000  5000000+0   5000000+0    43.7605        117         42.1754          121
/          100M        50       50+0        50+0         159.142        32.9        25.1047          209
/          10M         500      500+0       500+0        183.316        28.6        28.3515          185
/          1024 bytes  5000000  5000000+0   5000000+0    175.724        29.1        36.3496          141





Network Performance Testing
SCP performance test

Test methods

Copy a large file via scp (2 GB or more, such as a RHEL installation ISO) to measure network performance.
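
A typical invocation looks like the following (a sketch only; the file name and peer address are illustrative, not from the original test):

# copy the ISO to the peer and note the rate scp reports in its progress meter
scp rhel-server-6.0-x86_64-dvd.iso root@192.168.1.100:/tmp/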

Both the physical test machine and the virtual machines must be connected via Gigabit Ethernet, verified with the following command:
[root@rhel-PowerEdge-1 ~]# ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Link detected: yes
Speed: 1000Mb/s confirms the link is Gigabit.
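
For scripted checks, the negotiated speed can also be read from sysfs without parsing ethtool output (a small aside, not part of the original test; assumes the interface is named eth0):

cat /sys/class/net/eth0/speed    # prints 1000 on a Gigabit link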


Test results
Note: the VMware figures are from VMware Workstation 7.1 and are for reference only; the same applies below.

                           Xen    physical  VMware  Xen / physical
scp download speed (MB/s)  11.3   33.2      27.05   34.04%
scp upload speed (MB/s)    12.1   28.3      26.2    42.76%


Netperf

Test methods
Prepare another machine, A. The machines under test (B, C, and the virtual machines) are each connected to A directly via Gigabit Ethernet (as above).
Install the netperf-2.4.5-1.ky3.x86_64.rpm package on A, B, and the virtual machines.

Run the server side on the machine being tested (e.g. B, or a virtual machine):
[root@rhel-PowerEdge-1 ~]# netserver
Starting netserver at port 12865
Starting netserver at hostname 0.0.0.0 port 12865 and family AF_UNSPEC
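
To confirm the server is listening on its control port (a quick check, not part of the original post):

netstat -tlnp | grep 12865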

On machine A, create the test client script (/usr/local/sbin/netclient.sh) with the following contents:
#!/bin/sh
SERVERIP=$1
OUT=$2

if [ "$SERVERIP" == "" -o "$OUT" == "" ]; then
        echo "netclient <Server IP> <OUTPUT FILE>"
        exit 1
fi

# bulk TCP throughput, 4096-byte sends, 128K socket buffers
netperf -H $SERVERIP -i 10,2 -I 99,5 -- -m 4096 -s 128K -S 128K > $OUT
# bulk TCP throughput, 4096-byte sends, 56K (57344-byte) socket buffers
netperf -H $SERVERIP -i 10,2 -I 99,5 -- -m 4096 -s 57344 -S 57344 >> $OUT
# TCP connect/request/response rate: 32-byte request, 1024-byte response
netperf -H $SERVERIP -t TCP_CRR -- -r 32,1024 >> $OUT



Run the test script:
[root@rhel-PowerEdge-1 ~]# sh /usr/local/sbin/netclient.sh <IP of the other host or virtual machine> <output log file>
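
The throughput values netperf prints are the last field of each all-numeric result line, so they can be pulled from the log mechanically. A minimal sketch (an assumption about the log layout, not part of the original post; "netperf.log" is an illustrative name):

# print the Throughput column (10^6 bits/sec) of each 5-field TCP_STREAM result line
awk 'NF == 5 && $1 + 0 == $1 { print $NF }' netperf.log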


Test results



                                                     Xen     physical  VMware   Xen / physical
Throughput 1 (128K socket buffers, 10^6 bits/sec)    139.16  820.64    519      16.96%
Throughput 2 (56K socket buffers, 10^6 bits/sec)     151.97  819.78    485.19   18.54%
New TCP connections per second (TCP_CRR)             763.83  2508.85   1357.3   30.45%

Note: all figures are averages.








Network File System Performance Test
NFS I/O test (dd test)

Test methods
Host C mounts the SAN disk array and exports it over NFS; host A (and the virtual machines) connect to C via Gigabit Ethernet.
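
The export and mount steps, sketched (paths, hostnames, and export options are assumptions, not from the original post):

# on host C: export the SAN mount point, then reload the export table
echo '/san *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# on host A (or the virtual machine): mount the export on /mnt
mount -t nfs C:/san /mnt
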
A (or the virtual machine) mounts the NFS export on /mnt, then runs the script dd.sh:
#!/bin/bash
# /mnt (the NFS-mounted SAN disk array)
echo "/mnt"
echo "dd if=/dev/zero of=/mnt/test0.date bs=100M count=50"
dd if=/dev/zero of=/mnt/test0.date bs=100M count=50
rm -rf /mnt/test0.date
echo "dd if=/dev/zero of=/mnt/test1.date bs=10M count=500"
dd if=/dev/zero of=/mnt/test1.date bs=10M count=500
rm -rf /mnt/test1.date
echo "dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000"
dd if=/dev/zero of=/mnt/test2.date bs=1024 count=5000000
rm -rf /mnt/test2.date

# Run with: sh dd.sh &> test_nfs.log   (dd writes its statistics to stderr)

Test results


                      Xen   physical  Xen / physical
NFS I/O speed (MB/s)  7.5   87.6      8.56%
NFS I/O speed (MB/s)  7.6   90.5      8.40%
NFS I/O speed (MB/s)  7.4   86.6      8.55%
Average               7.5   88.21     8.50%
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
