
Re: [Xen-devel] MPI benchmark performance gap between native linux and domU



Jose,

Thank you so much for your valuable information.

  I guess I overlooked the rates you reported in your post.
  Looking at your rates carefully now, I got somewhat confused. When you
say MB/sec, do you mean Megabytes/sec or Megabits/sec?

It is Megabyte/sec (2^20 bytes)

In any case, these
are much lower rates than in our case (we were using a gigabit network).
Now, I am starting to think that your problem might be different from
ours, but it does not hurt to try changing the advertised window, just
in case.
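As a minimal sketch of what "changing the advertised window" can mean in practice: the advertised TCP window is bounded by the socket receive buffer, so raising SO_RCVBUF before connecting raises the window ceiling. The 256 KB figure below is an arbitrary example value, not one taken from this thread:

```python
import socket

# The kernel derives the advertised TCP window from the receive buffer,
# so raising SO_RCVBUF (before connecting) raises the window ceiling.
BUF_SIZE = 256 * 1024  # arbitrary example value, not from this thread

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)

# The kernel may round or cap the value (Linux doubles it internally
# and clamps it to net.core.rmem_max), so read back what was granted.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
s.close()
```

System-wide limits (e.g. net.ipv4.tcp_rmem on Linux) still cap what a per-socket request can achieve, so the sysctl route may be needed as well.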

I will try your suggestion and send out an update tomorrow morning.

  Also, the numbers you report are inconsistent. You mention that your
network is 10 MB/s, and that native Linux achieves 14.9 MB/s. How is it
possible to achieve a throughput higher than the network bandwidth?
Could you please clarify?

Yes, it is a little confusing. It is due to the calculation of SendRecv's throughput. If you take a look at the PMB user manual (following the link in my previous email), the throughput is defined as 2X/1.048576/time, where X is the message size in bytes, time is the latency, and 1.048576 converts to 2^20-byte megabytes. So it is a weighted throughput and can go beyond 10 MB/s, which is the maximum bandwidth of the network.
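For reference, that formula can be sketched as follows. The message size and latency values here are made-up round numbers, not measurements from the benchmark run discussed in this thread:

```python
# PMB SendRecv throughput: each rank both sends and receives X bytes,
# so the benchmark credits 2*X bytes per measured interval.
# Dividing by 1.048576 converts byte counts into 2^20-byte megabytes
# (time is in microseconds, so the result comes out in MB/s).

def sendrecv_throughput_mb_s(msg_bytes, time_usec):
    """Throughput in MB/s as PMB computes it for SendRecv."""
    return 2 * msg_bytes / 1.048576 / time_usec

# Hypothetical example: a 1 MiB message with 200000 us latency.
print(sendrecv_throughput_mb_s(2**20, 200000))  # → 10.0

# Because both directions are counted together, the figure can exceed
# the one-way link rate (e.g. 10 MB/s) even though no single direction does:
print(sendrecv_throughput_mb_s(2**20, 150000))  # ≈ 13.3
```

This is why a reported 14.9 MB/s on a 10 MB/s network is not a contradiction: the send and receive bytes are summed before dividing by the elapsed time.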

Thanks.

Xuehai

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

