
Re: [MirageOS-devel] performance regression tests





On Sun, Feb 15, 2015 at 11:52 AM, Richard Mortier <richard.mortier@xxxxxxxxxxxx> wrote:
On 15 February 2015 at 11:28, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
> On 15 Feb 2015, at 11:19, Richard Mortier <richard.mortier@xxxxxxxxxxxx> wrote:
>>
>> On 15 February 2015 at 11:15, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>>> This is great to see -- thanks for working on this Masoud!
>>>
>>> In particular, having even a simple iperf test would let us test
>>> several interesting combinations straight away:
>>>
>>> - vary OCaml version (4.01 vs 4.02 is now supported), and there
>>>   is an experimental inlining branch in 4.03dev that could directly
>>>   be tested using this infrastructure.
>>
>> ok-- so i guess that needs a switch/env var to specify the `opam switch` to use?
>
> Yeah. Although from the performance scripts' perspective, it's better
> if they just assume that there is a working OPAM environment. It would
> be easier to control these parameters from outside, and keep the perf
> scripts as easy to run as possible.

ah-- original thinking was to make `mir-perf.sh` a harness that might
be fired by `git-bisect`.

if it's just to be a script to run a single experiment (parameter set)
in a pre-configured environment, then that's a lot simpler.
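the git-bisect idea could be sketched as a thin wrapper that maps a
measured result onto git-bisect's exit-code convention (0 = good,
1-124 = bad, 125 = skip this rev). everything here is hypothetical --
the threshold, and how the throughput would be parsed out of an iperf
run, are illustrative and not from this thread:

```shell
#!/bin/sh
# Hypothetical `git bisect run` harness around mir-perf.sh.
# git-bisect convention: exit 0 = good, 1-124 = bad, 125 = cannot test.

THRESHOLD_MBPS=800   # hypothetical acceptable iperf throughput

# Map a measured throughput (in Mbps) to a git-bisect exit status.
classify () {
  measured=$1
  if [ -z "$measured" ]; then
    return 125          # build or test failed: skip this revision
  elif [ "$measured" -ge "$THRESHOLD_MBPS" ]; then
    return 0            # good: at or above threshold
  else
    return 1            # bad: performance regression
  fi
}

# In a real harness, $measured would come from the iperf unikernel run,
# e.g. (hypothetical flags): measured=$(./mir-perf.sh --test iperf ...)
classify 900; echo "900 Mbps -> exit $?"
classify 400; echo "400 Mbps -> exit $?"
```

then `git bisect run ./mir-perf.sh` would drive the search automatically.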

> A couple of other things that might help:
>
> - Luke Dunstan has a rather comprehensive acceptance test suite for
>   MDNS: https://github.com/infidel/ocaml-mdns/tree/master/lib_test/acceptance
>
> - OCamlPro has a benchmarking system for core OCaml here that may have
>   some useful libraries: https://github.com/OCamlPro/operf-macro
>
> - Performance tests could be wrapped using Core_bench, which
>   does linear regression across runs.
>   https://realworldocaml.org/v1/en/html/understanding-the-garbage-collector.html#the-mutable-write-barrier
>

cool, ta.

>>> - evaluate the impact of some incoming features, such as the open
>>>   RFC for checksum offload.
>>
>> how are they specified -- as a PR?
>> in which case, masoud-- i guess that
>> https://help.github.com/articles/checking-out-pull-requests-locally/
>> is a starting point for how to specify a particular PR rather than
>> simply a commit rev.
>
> Yes, although again this would be better done outside the performance
> harness as an OPAM pin for the local environment. Just having the
> ability to quickly run a performance test would be invaluable at this
> stage.

so the caller of the script will

+ select their opam switch
+ pin any libraries, whether to PRs or particular commit-revs
+ record the environment configuration

+ then run the script, specifying the test to run, which will
 + start the unikernel-under-test
 + start any testing-unikernels (eg., iperf-client, iperf-server)
 + collect and record test results

...right?
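a minimal caller-side sketch of those steps (all names here are
illustrative assumptions -- the switch version, pin target, file names,
and the `record_env` helper are not from this thread; only the `opam
switch` / `opam pin` subcommands themselves are real):

```shell
# Hypothetical caller-side setup; names and versions are illustrative.
#
# 1. select the opam switch and pin libraries (manual, needs a working opam):
#      opam switch 4.02.1
#      opam pin add <pkg> <git-url>#<branch-or-pr-head>
#
# 2. record the environment configuration alongside the results:
record_env () {
  {
    echo "date: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    echo "switch: $(opam switch show 2>/dev/null || echo unknown)"
    echo "pins: $(opam pin list 2>/dev/null | tr '\n' ' ')"
  } > "$1"
}
record_env env-config.txt

# 3. run the experiment, e.g. (hypothetical flags):
#      ./mir-perf.sh --test iperf >> results.log
cat env-config.txt
```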

ultimately i was thinking to have the recorded test results put in a
repo somewhere too, so that they could be visualised, compared to a
benchmark, etc.
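one way the recorded results could land in such a repo (the CSV schema,
field names, and placeholder variables are assumptions, purely to show
the shape of the idea):

```shell
# Sketch of a per-run results log kept in a git repo; schema is assumed.
RESULTS=results.csv
SWITCH=${SWITCH:-unknown}   # would come from the recorded env config
MBPS=${MBPS:-0}             # would be parsed from the iperf output

# Write the header once, then append one row per run.
[ -f "$RESULTS" ] || echo "date,test,switch,throughput_mbps" > "$RESULTS"
printf '%s,iperf,%s,%s\n' "$(date -u +%Y-%m-%d)" "$SWITCH" "$MBPS" >> "$RESULTS"

# Commit so runs can be compared/visualised later, e.g.:
#   git add results.csv && git commit -m "record iperf run"
cat "$RESULTS"
```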

>> just to be clear -- you mean do iperf via unikernels, i.e., all the
>> tests that are executed should also be unikernels rather than standard
>> tools, so that we don't take dependencies on an underlying platform
>> like the dom0?
>
> Yes -- Xen unikernels would be the primary target.

cool. less shell hacking, more unikernel hacking :)

> Starting and stopping VMs in open source Xen can be a bit of a pain, so
> it would be ok if the test harness used XenServer (which the ARM SDcard
> images now include). Jon or Dave could comment on the state of the XMLRPC
> OCaml bindings to XenServer...

that would be useful. is it easy to get xenserver installed on x86 as
well? (that's the platform masoud is mostly working on.)

x86 as well? What is this, 2014? :-) Your options are:

1. install the latest 6.5 release of xenserver on bare metal

Pros: this is well tested and will work very well as a virtualisation platform
Cons: it takes over a whole machine. To do unikernel development you'll need to install a dev VM, i.e. you can't do it in dom0. It's good practice to leave dom0 alone anyway. Think of it like a black box which can run VMs.

2. build the latest version from source and install on a CentOS 6/7 or maybe Ubuntu box

Pros: you can do whatever you want with dom0
Cons: some aspect of running VMs / managing networks / managing storage is bound not to work and you'll need to debug it. There is little automated testing of this configuration; I (and also Euan and Jon) try to fix bugs as we encounter them, on a best-effort basis.

If you want to focus on unikernel dev and test, I recommend option 1. If you want a new hobby of debugging VM hosting software, then try option 2.

In the long run ... we'll converge option 1 and option 2... but I wouldn't wait.

Cheers,
Dave

>> cool... (one day i must learn about the cambridge infrastructure machines :)
>
> Be careful what you wish for :)

now i'm genuinely curious... :)

--
Richard Mortier
richard.mortier@xxxxxxxxxxxx

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel



--
Dave Scott

 

