
Re: [MirageOS-devel] Network throughput and latency measurement on MirageOS



Many thanks for your comments!

Sure, testing UDP could be helpful, as you mentioned.
I have just finished modifying the iperf source code so that it runs over the
UDP stack, and have started the performance evaluation again on my Xen physical
server.

Kind regards,

--
Takayuki Imada

On 1/11/17 11:13 AM, Christiano F. Haesbaert wrote:
That's great work!

Did you consider testing UDP too?

This could help isolate the bottleneck(s): the sending side is independent of
the receiving side, there are no ACKs and no latency dependencies, and there is
far less to reason about than with the TCP state machine.

UDP also makes it possible to measure packets per second versus packet size.
Knowing how many PPS the UDP stack can produce, and how many it can receive,
gives a ceiling for reasoning about TCP, since the UDP stack does almost
nothing on top of the datagram path. For instance, if you find the sending
ceiling for 1400-byte UDP datagrams to be about 40 kpps, chances are you'll
never reach ~500 MB/s in TCP.
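
Rough back-of-envelope, just to make that ceiling explicit (using the example
figures above):

  (* A PPS ceiling at a fixed payload size bounds achievable TCP goodput. *)
  let () =
    let payload_bytes = 1400 and pps = 40_000 in
    Printf.printf "ceiling ~ %.0f MB/s\n"
      (float_of_int (payload_bytes * pps) /. 1e6)
    (* prints: ceiling ~ 56 MB/s -- well short of 500 MB/s *)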

One step further would be to test raw datagrams, bypassing the stack, which
would exercise only the Xen/device I/O path.
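
Something like the following would exercise only that path (again a sketch:
the NETWORK module name and write signature vary between mirage-net-xen
releases, and the frame contents here are a placeholder):

  open Lwt.Infix

  (* Push pre-built Ethernet frames straight out of the Xen netfront
     device, bypassing the TCP/IP stack entirely. *)
  module Main (N : Mirage_types_lwt.NETWORK) = struct
    let start net =
      let frame = Cstruct.create 1514 in   (* placeholder frame contents *)
      let rec loop n =
        if n = 0 then Lwt.return_unit
        else
          N.write net frame >>= function
          | Ok ()   -> loop (n - 1)
          | Error _ -> Lwt.return_unit
      in
      loop 1_000_000
  end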

I had considerable success optimizing a BSD network stack with a similar
approach, so here are my two cents :-).

Thanks again for your work.

On 11 January 2017 at 11:47, Takayuki Imada <takayuki.imada@xxxxxxxxx> wrote:
Hi Mindy,

I have finished the first comparison, MirageOS VMs vs. Linux VMs, and found
that the throughput of the MirageOS VMs with a larger sender buffer size is
considerably lower than I expected.
- Configuration
http://www.cl.cam.ac.uk/~ti259/temp/config.pdf
- Throughput (64 - 2048 Bytes sender buffer size)
http://www.cl.cam.ac.uk/~ti259/temp/throughput.pdf
- Latency (TCP round-trip pingpong with a 1-Byte payload)
http://www.cl.cam.ac.uk/~ti259/temp/latency.pdf

I am now investigating the bottleneck using the xentrace/xentop commands and
mirage-trace-viewer (a sketch of how I enable the tracing follows the list
below). I have also found the following:
  - the vCPU utilization on the receiver side was reaching 100%
  - the sender side and Dom0 do not seem to be the bottleneck
    -> http://www.cl.cam.ac.uk/~ti259/temp/cpu.pdf
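
For reference, I enable the tracing in config.ml roughly as below and then
inspect the output with mirage-trace-viewer. This is only a sketch: the
unikernel name, buffer size, and job list are placeholders, and the device
combinator names differ between mirage releases.

  open Mirage

  (* config.ml excerpt: enable Lwt tracing so mirage-trace-viewer can
     visualise scheduling inside the unikernel. *)
  let main = foreign "Unikernel.Main" (stackv4 @-> job)

  let () =
    let tracing = mprof_trace ~size:1_000_000 () in
    register ~tracing "udp_iperf"
      [ main $ generic_stackv4 default_network ]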

I will let you know when I have further findings.

Kind regards,

--
Takayuki Imada


On 1/10/17 6:30 PM, Mindy wrote:

Hi Takayuki,

Thanks for posting!  Do you have any comments on your findings with these
tools so far?  I'm particularly interested in performance regressions from
the serialization/deserialization changes in tcpip's current master.

-Mindy


_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel

 

