
Re: Mirari, the tool you've all been waiting for!

On 10 Feb 2013, at 19:48, Richard Mortier <Richard.Mortier@xxxxxxxxxxxxxxxx> wrote:
>> * run: this is yet to be implemented, and the reason for this mail.  Running 
>> a Mirage application is quite stateful: in the case of Xen, we want to 
>> monitor the kernel, attach a console, and so on.  Similarly for UNIX, one 
>> can imagine the socket version opening up a control channel for Mirari to 
>> listen on.  And in the kFreeBSD backend, this would be done via ioctls or
>> other kernel/user interfaces.
> one very quick request - don't wait to get it documented and released. i 
> think it will still be *significantly* useful in terms of helping build 
> community (excuse the pun) and getting others engaged before "mirari run" 
> works. arguably for asplos the main things that need to work are 
> understanding how to configure, build and run unix and xen targets (bonus 
> marks for "unix socket" vs "unix mirage" or whatever we call them now) -- and 
> it's the understanding and building bits that require a lot of state and 
> runes at the moment. once i've got foo.xen or foo.native, i can run it 
> without too much trouble.

Yep; right now it builds Xen kernels with filesystem support, which leaves us 
where we were with OASIS, but with just a single configuration file required 
rather than all the rest.

The run target is really post-ASPLOS work, which is all about coordinating and 
managing multiple unikernel instances (what could we call that... a 
'multikernel' perhaps? Ahem).

>> So I'm going to extend Mirari to add `run` support which is stateful.  Each 
>> run will give the application a unique ID, stored locally, and enough 
>> information to poll the particular instance.  Every deployment has a 
>> different way to track it (Amazon EC2 vs XM vs Xenopsd are all completely 
>> different).
> this could presumably be extended to include the perf test/experimental 
> harness that the asplos paper was crying out for?

Test and experimental harnesses are basically just build variants, since they 
encode a policy of how the libraries should be glued together (with traffic 
generators and so on).  I think we can get very far with just loopback traffic 
generation (that is, functions calling functions rather than any wire traffic).
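To make that concrete, here's a rough sketch of what loopback generation looks 
like: the "sender" is just a function that hands buffers straight to the 
"receiver", so we exercise the library glue with no wire traffic at all.  None 
of these names are real Mirari/Mirage APIs, just an illustration:

```ocaml
(* A sink that counts the bytes delivered to it; returns the receive
   function and a way to read the running total. *)
let make_sink () =
  let total = ref 0 in
  let receive buf = total := !total + Bytes.length buf in
  receive, (fun () -> !total)

(* A generator that pushes [packets] buffers of [size] bytes straight
   into the sink: a function call standing in for a network hop. *)
let generate ~packets ~size push =
  let buf = Bytes.make size 'x' in
  for _i = 1 to packets do push buf done

let () =
  let push, total = make_sink () in
  generate ~packets:1000 ~size:1460 push;
  Printf.printf "delivered %d bytes\n" (total ())
  (* prints "delivered 1460000 bytes" *)
```

Swapping the direct call for a real network device model is then a build-time 
policy decision rather than a code change, which is the point.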

Balraj did one of these in mirage-skeleton/tcp, which is a Mirage kernel that 
spins up TCP iperf across a local bridge.  Perfect for testing TCP performance 
with no external dependencies.
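Returning to the stateful `run` support quoted above: the locally stored 
per-run state might look roughly like the record below, with a unique ID plus 
backend-specific polling information.  All of these type and constructor names 
are hypothetical, not actual Mirari types:

```ocaml
(* Hypothetical per-run state: one variant per deployment backend,
   since EC2, xm and xenopsd are all polled completely differently. *)
type backend =
  | Ec2 of string            (* EC2 instance id *)
  | Xm of string             (* domain name known to xm *)
  | Xenopsd of string        (* VM uuid known to xenopsd *)
  | Unix_socket of string    (* path to a local control socket *)

type run = {
  id      : string;          (* unique per-run identifier *)
  backend : backend;         (* enough information to poll the instance *)
}

(* Render a run as a line in a local state file. *)
let to_line { id; backend } =
  match backend with
  | Ec2 i         -> Printf.sprintf "%s ec2 %s" id i
  | Xm d          -> Printf.sprintf "%s xm %s" id d
  | Xenopsd u     -> Printf.sprintf "%s xenopsd %s" id u
  | Unix_socket p -> Printf.sprintf "%s unix %s" id p

let () =
  print_endline (to_line { id = "run-0001"; backend = Xm "foo" })
  (* prints "run-0001 xm foo" *)
```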
