
Re: [MirageOS-devel] performance regression tests

On 15 February 2015 at 11:28, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
> On 15 Feb 2015, at 11:19, Richard Mortier <richard.mortier@xxxxxxxxxxxx> 
> wrote:
>> On 15 February 2015 at 11:15, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>>> This is great to see -- thanks for working on this Masoud!
>>> In particular, having even a simple iperf test would let us test
>>> several interesting combinations straight away:
>>> - vary OCaml version (4.01 vs 4.02 is now supported), and there
>>>  is an experimental inlining branch in 4.03dev that could directly
>>>  be tested using this infrastructure.
>> ok-- so i guess that needs a switch/env var to specify the `opam switch` to 
>> use?
> Yeah.  Although from the performance scripts' perspective, it's better
> if they just assume that there is a working OPAM environment.  It would
> be easier to control these parameters from outside, and keep the perf
> scripts as easy to run as possible.

ah-- my original thinking was to make `mir-perf.sh` a harness that
could be driven by `git bisect`.

if it's just to be a script to run a single experiment (parameter set)
in a pre-configured environment, then that's a lot simpler.
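(the bisect idea could still work as a thin wrapper *around* the
single-experiment script: `git bisect run` treats exit 0 as good, 1-124
as bad, and 125 as "skip this revision". a hedged sketch -- the
threshold value, and the idea that mir-perf.sh reduces a run to one
throughput number, are assumptions, not current behaviour:)

```shell
#!/bin/sh
# Hypothetical wrapper for `git bisect run` around mir-perf.sh.
# The threshold (Mbit/s) and the single-number output of mir-perf.sh
# are assumptions; adapt to whatever the harness actually reports.
classify() {
  throughput=$1       # in a real run: throughput=$(./mir-perf.sh iperf)
  threshold=900
  if [ "$throughput" -ge "$threshold" ]; then
    return 0          # good revision: no regression
  else
    return 1          # bad revision: regression detected
  fi
  # (return 125 instead to tell git bisect to skip an unbuildable rev)
}
```

then something like `git bisect start; git bisect bad HEAD; git bisect
good <known-good-rev>; git bisect run ./bisect-wrapper.sh` would drive
it.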

> A couple of other things that might help:
> - Luke Dunstan has a rather comprehensive acceptance test suite for
>   MDNS: https://github.com/infidel/ocaml-mdns/tree/master/lib_test/acceptance
> - OCamlPro has a benchmarking system for core OCaml here that may have
>   some useful libraries: https://github.com/OCamlPro/operf-macro
> - Performance tests could be wrapped using Core_bench, which
>   does linear regression across runs.
> https://realworldocaml.org/v1/en/html/understanding-the-garbage-collector.html#the-mutable-write-barrier

cool, ta.

>>> - evaluate the impact of some features incoming such as the open
>>>  RFC for checksum offload.
>> how are they specified -- as a PR?
>> in which case, masoud-- i guess that
>> https://help.github.com/articles/checking-out-pull-requests-locally/
>> is a starting point for how to specify a particular PR rather than
>> simply a commit rev.
> Yes, although again this would be better done outside the performance
> harness as an OPAM pin for the local environment.  Just having the
> ability to quickly run a performance test would be invaluable at this
> stage.
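(for the record, the two mechanisms look roughly like this -- the
package name `mirage-net-xen` and the branch are hypothetical
stand-ins for whatever the checksum-offload PR actually touches, and
`ID` is a placeholder for a real PR number:)

```shell
# Checking out a PR locally (GitHub exposes each PR as a ref under pull/):
git fetch origin pull/ID/head:pr-ID    # replace ID with the PR number
git checkout pr-ID

# ...or, as suggested, pin in the local opam environment instead, so the
# perf scripts never need to know about PRs at all:
opam pin add mirage-net-xen 'git+https://github.com/mirage/mirage-net-xen#some-branch'
```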

so the caller of the script will

+ select their opam switch
+ pin any libraries, whether to PRs or particular commit-revs
+ record the environment configuration

+ then run the script, specifying the test to run, which will
  + start the unikernel-under-test
  + start any testing-unikernels (e.g., iperf-client, iperf-server)
  + collect and record test results
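(a sketch of that script shape -- the `xl` invocations and the
config/result file layout are purely illustrative assumptions, shown as
comments only:)

```shell
#!/bin/sh
# Hypothetical skeleton of the single-experiment script; assumes a
# pre-configured opam environment, per the discussion above.
run_experiment() {
  test_name=$1
  # start the unikernel-under-test:
  #   xl create "conf/${test_name}-server.cfg"
  # start any testing-unikernels (e.g. the iperf client):
  #   xl create "conf/${test_name}-client.cfg"
  # collect and record test results:
  #   xl console "${test_name}-client" > "results/${test_name}.log"
  echo "results/${test_name}.log"   # report where results were recorded
}
```

invoked as e.g. `run_experiment iperf` once the caller has done the
switch/pin/record steps.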


ultimately i was thinking the recorded test results should also be
pushed to a repo somewhere, so that they can be visualised, compared
against a benchmark, etc.
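(the benchmark comparison could then just be a small check over the
recorded numbers. a minimal sketch, assuming each run is reduced to a
single throughput figure and allowing 5% measurement noise -- both
assumptions, not anything the harness does yet:)

```shell
# Hypothetical regression check: fail (exit 1) when the measured
# throughput falls more than 5% below the recorded baseline.
check_against_baseline() {
  measured=$1
  baseline=$2
  awk -v m="$measured" -v b="$baseline" \
      'BEGIN { exit (m < b * 0.95) ? 1 : 0 }'
}
```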

>> just to be clear -- you mean run iperf via unikernels, i.e., all the
>> tests that are executed should also be unikernels rather than standard
>> tools so that we don't take dependencies on an underlying platform
>> like the dom0?
> Yes -- Xen unikernels would be the primary target.

cool. less shell hacking, more unikernel hacking :)

> Starting and stopping VMs in open source Xen can be a bit of a pain, so
> it would be ok if the test harness used XenServer (which the ARM SDcard
> images now include).  Jon or Dave could comment on the state of the XMLRPC
> OCaml bindings to XenServer...

that would be useful. is it easy to get xenserver installed on x86 as
well? (that's the platform masoud is mostly working on.)

>> cool... (one day i must learn about the cambridge infrastructure machines :)
> Be careful what you wish for :)

now i'm genuinely curious... :)

Richard Mortier
