
Re: [MirageOS-devel] performance regression tests



Hi all,

We have now created the `mirage-perf` repository, which provides a simple Mirage performance regression test on XenServer:

https://github.com/mirage/mirage-perf

This version is much simpler than the previous one we announced (https://github.com/koleini/mirage-perf). So far it uses a single unikernel, based on Balraj's iperf, that sends TCP traffic from one interface to a second at the highest possible rate and retrieves statistics.

The previous version of mirage-perf runs in Xen dom0: it creates a Linux-based VM (the traffic generator) and a Mirage unikernel, and installs off-the-shelf tools to generate and monitor traffic, focusing on customised performance regression testing of mirage-net-xen.

Please send us any feedback or suggestions for tests that you think would be useful to add.

Thanks.

On 15/02/15 12:50, Richard Mortier wrote:
yes, it's still the 2014/15 academic year :p

i suspect i know the answer to this but just in case: i presume by
"bare metal" you really do mean bare metal? ie., inside a virtualbox
is not going to work?

masoud-- if that's the case, i'll pick this up with you offline.
there's a server in nottingham that you should be able to get access
to and do what you want with. i originally put xen on it by hand and
used the xm/xl tools, but it sounds like it might be time to wipe it
and start again with xenserver.


On 15 February 2015 at 12:18, David Scott <scott.dj@xxxxxxxxx> wrote:

On Sun, Feb 15, 2015 at 11:52 AM, Richard Mortier
<richard.mortier@xxxxxxxxxxxx> wrote:
On 15 February 2015 at 11:28, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
On 15 Feb 2015, at 11:19, Richard Mortier <richard.mortier@xxxxxxxxxxxx>
wrote:
On 15 February 2015 at 11:15, Anil Madhavapeddy <anil@xxxxxxxxxx>
wrote:
This is great to see -- thanks for working on this Masoud!

In particular, having even a simple iperf test would let us test
several interesting combinations straight away:

- vary the OCaml version (both 4.01 and 4.02 are now supported), and there
  is an experimental inlining branch in 4.03dev that could be tested
  directly using this infrastructure.
ok-- so i guess that needs a switch/env var to specify the `opam
switch` to use?
Yeah.  Although from the performance scripts' perspective, it's better
if they just assume that there is a working OPAM environment.  It would
be easier to control these parameters from outside, and keep the perf
scripts as easy to run as possible.
ah-- original thinking was to make `mir-perf.sh` a harness that might
be fired by `git-bisect`.

if it's just to be a script to run a single experiment (parameter set)
in a pre-configured environment, then that's a lot simpler.

A couple of other things that might help:

- Luke Dunstan has a rather comprehensive acceptance test suite for
   MDNS:
https://github.com/infidel/ocaml-mdns/tree/master/lib_test/acceptance

- OCamlPro has a benchmarking system for core OCaml here that may have
   some useful libraries: https://github.com/OCamlPro/operf-macro

- Performance tests could be wrapped using Core_bench, which
   does linear regression across runs.

https://realworldocaml.org/v1/en/html/understanding-the-garbage-collector.html#the-mutable-write-barrier

cool, ta.

- evaluate the impact of some incoming features, such as the open
  RFC for checksum offload.
how are they specified -- as a PR?
in which case, masoud-- i guess that
https://help.github.com/articles/checking-out-pull-requests-locally/
is a starting point for how to specify a particular PR rather than
simply a commit rev.
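The GitHub help page above boils down to fetching the PR's head ref into a local branch. Here is a self-contained sketch of that workflow; the "remote" is simulated locally, and the PR number 42 is purely illustrative (in practice `origin` would be the GitHub remote and 42 the real PR number):

```shell
set -eu
# Build a throwaway "upstream" repo with a PR-style ref, so the fetch
# below works without network access.
tmp=$(mktemp -d)
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "pr commit"
git -C "$tmp/upstream" update-ref refs/pull/42/head HEAD

git clone -q "$tmp/upstream" "$tmp/clone"
cd "$tmp/clone"

# The key commands: fetch the PR head into a local branch, check it out.
git fetch -q origin pull/42/head:pr-42
git checkout -q pr-42
git log --oneline -1
```

Once on the `pr-42` branch, the perf harness can build against it (or, as discussed below, the library can simply be `opam pin`ned to that checkout).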
Yes, although again this would be better done outside the performance
harness as an OPAM pin for the local environment.  Just having the
ability to quickly run a performance test would be invaluable at this
stage.
so the caller of the script will

+ select their opam switch
+ pin any libraries, whether to PRs or particular commit-revs
+ record the environment configuration

+ then run the script, specifying the test to run, which will
   + start the unikernel-under-test
   + start any testing-unikernels (eg., iperf-client, iperf-server)
   + collect and record test results

...right?
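The caller-side steps above can be sketched as a small driver script. All the names here are illustrative (the `mir-perf.sh` entry point, the switch version, the pinned library); `DRY_RUN=1` makes the script print the commands instead of executing them, since actually switching compilers and pinning libraries needs a real opam setup:

```shell
set -eu
# DRY_RUN=1 prints each command rather than running it.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. select the opam switch (compiler version under test)
run opam switch 4.02.1
# 2. pin any libraries, e.g. to a branch or PR head (URL illustrative)
run opam pin add mirage-net-xen "https://github.com/mirage/mirage-net-xen.git"
# 3. record the environment configuration alongside the results
run opam list --installed
# 4. run the experiment, which starts the unikernel-under-test plus any
#    helper unikernels (iperf client/server) and collects the results
run ./mir-perf.sh iperf
```

Flipping `DRY_RUN=0` in a properly configured environment would execute the real commands; keeping environment setup (steps 1-3) outside `mir-perf.sh` itself matches the suggestion that the perf scripts just assume a working OPAM environment.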

ultimately i was thinking to have the recorded test results put in a
repo somewhere too, so that they could be visualised, compared to a
benchmark, etc.

just to be clear -- you mean do iperf via unikernels, ie., all the
tests that are executed should also be unikernels rather than standard
tools, so that we don't take dependencies on an underlying platform
like dom0?
Yes -- Xen unikernels would be the primary target.
cool. less shell hacking, more unikernel hacking :)

Starting and stopping VMs in open source Xen can be a bit of a pain, so
it would be ok if the test harness used XenServer (which the ARM SD card
images now include).  Jon or Dave could comment on the state of the
XML-RPC OCaml bindings to XenServer...
that would be useful. is it easy to get xenserver installed on x86 as
well? (that's the platform masoud is mostly working on.)

x86 as well? What is this, 2014? :-) Your options are:

1. install the latest 6.5 release of xenserver on bare metal

Pros: this is well tested and will work very well as a virtualisation
platform
Cons: it takes over a whole machine. To do unikernel development you'll need
to install a dev VM i.e. you can't do it in dom0. It's good practice to
leave dom0 alone anyway. Think of it like a black box which can run VMs.

2. build the latest version from source and install on a CentOS 6/7 or maybe
Ubuntu box

Pros: you can do whatever you want with dom0
Cons: some aspect of running VMs / managing networks / managing storage is
bound to not work and you'll need to debug it. There is little automated
testing of this configuration; I (and also Euan and Jon) try to fix bugs
when we encounter them on a best-effort basis.

If you want to focus on unikernel dev and test, I recommend option 1. If you
want a new hobby of debugging VM hosting software, then try option 2.

In the long run ... we'll converge option 1 and option 2... but I wouldn't
wait.

Cheers,
Dave


cool... (one day i must learn about the cambridge infrastructure
machines :)
Be careful what you wish for :)
now i'm genuinely curious... :)

--
Richard Mortier
richard.mortier@xxxxxxxxxxxx

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel



--
Dave Scott



--
Research Fellow
School of Computer Science
University of Nottingham
Nottingham
NG8 1BB








