Re: [Xen-devel] Network performance - sending from VM to VM using TCP
Are you using FreeBSD or Linux?

On Thu, 26 May 2005, Cherie Cheung wrote:

> Hi,
>
> I have been simulating a network using dummynet and evaluating it
> using netperf. Xen3.0-unstable is used and the VMs are
> vmlinuz-2.6.11-xenU. The simulated link is 300 Mbps with 80 ms RTT.
> Using netperf, I sent data over TCP from domain-0 of machine 1 to
> domain-0 of machine 2. Then I repeated the experiment, this time
> sending from VM-1 of machine 1 to VM-1 of machine 2.
>
> However, the performance between the two VMs is substantially worse
> than that between the two domain-0s. Here are the results:
>
> FROM VM to VM:
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
> (172.19.222.210) port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  65536   65536    80.28       24.83
>
> FROM domain-0 to domain-0:
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
> (137.110.222.236) port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  65536   65536    80.11      280.62
>
> Here are the network buffer settings:
>
> net.core.wmem_max = 8388608
> net.core.rmem_max = 8388608
> net.ipv4.tcp_bic = 1
> net.ipv4.tcp_rmem = 4096 87380 8388608
> net.ipv4.tcp_wmem = 4096 65536 8388608
>
> Does anyone know why the performance across the two VMs is so bad? Is
> there a fix for it? Thank you.
>
> Cherie

--
"I will not be pushed, filed, stamped, indexed, briefed, debriefed or
numbered. My life is my own."

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
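(The dummynet configuration behind the "300 Mbps with 80 ms RTT" link is not shown in the thread. On a FreeBSD box in the forwarding path, such a link is typically emulated with a pair of ipfw/dummynet pipes; the following is only a sketch, assuming the 80 ms RTT is split into 40 ms of one-way delay per direction:)

  # Sketch only -- the actual rules used are not given in the thread.
  # Two dummynet pipes, one per direction, each limited to 300 Mbit/s
  # with 40 ms of one-way delay, giving an 80 ms round-trip time.
  ipfw pipe 1 config bw 300Mbit/s delay 40ms
  ipfw pipe 2 config bw 300Mbit/s delay 40ms
  ipfw add 100 pipe 1 ip from any to any in
  ipfw add 200 pipe 2 ip from any to any out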
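(For reference, the quoted results look like plain 80-second TCP_STREAM runs with the default socket buffers and 64 KB send messages. A roughly equivalent invocation, with the buffer sysctls above applied first, might look like the sketch below; the netperf options and target host are assumptions inferred from the reported numbers, not taken from the thread:)

  # Apply the quoted Linux 2.6 buffer settings on sender and receiver.
  sysctl -w net.core.wmem_max=8388608
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.ipv4.tcp_bic=1
  sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"

  # 80-second TCP_STREAM test against the netserver on the remote host,
  # sending 64 KB messages (assumed from the Send Message Size column).
  netperf -H dw10.ucsd.edu -l 80 -t TCP_STREAM -- -m 65536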