
Re: [MirageOS-devel] Some thoughts on operating unikernel environments

On 22 August 2015 at 12:34, Thomas Leonard <talex5@xxxxxxxxx> wrote:
> On 21 August 2015 at 17:07, Gareth Rushgrove <gareth@xxxxxxxxxxxxxxxxx> wrote:
>> I'd managed to get a bunch of thoughts out of my head and into blog
>> post form, on the theme of operating unikernels.
>> The general gist is, assuming unikernels are awesome, how do we build
>> and run production systems based on them?
>> http://www.morethanseven.net/2015/08/21/operating-unikernel-challenges/
>> This is mainly a list of problems; I'd love to hear from anyone who
>> has done any hard thinking on any of them or cut any tools in this
>> space.
> Hi Gareth,
> A few thoughts:

Thanks for replying.

> "How do I compose several unikernels together to build an application?"
> I think you answer this later, in the Orchestration section: the same
> way we do with other VMs/containers - using Docker Compose, Ubuntu
> Juju, etc. I haven't built anything big enough to need this yet
> though.

That's my view as well (the CloudFoundry or Kubernetes model would
appear to work?) but I've not seen anyone doing this yet. Which
probably means gaps exist when you actually try :) If anyone takes a
run at this I'd certainly be interested; I'm guessing Lattice
[http://lattice.cf/] might be a nice place to start?

> What does a Continuous integration or deployment pipeline look like?
> Amir gives an example in "Towards Heroku for Unikernels: Part 1 -
> Automated deployment":
> http://amirchaudhry.com/heroku-for-unikernels-pt1/

While that's an example of what's possible, I don't think it's the
highly opinionated, high-level interface that would be required to
make it easy to get started. Git hooks, Makefiles and shell scripts
are great for prototypes but don't tend to make for a great experience
in my view. The skeleton is great, but only covers running unit tests,
and only on Travis. Test Kitchen [http://kitchen.ci/] is maybe a nice
model to look at - as a thought experiment, "what would Test Kitchen
for Mirage look like?"

> "By removing the operating system we remove things like host firewalls ..."
> I see two main uses for firewalls. One is to avoid accidentally
> exposing a host-only service (e.g. a database used by a web app in the
> same VM) and the other is to provide basic access control between VMs
> (only the web VM can access the DB VM).
> For the first, two services in the same Mirage unikernel will
> communicate directly using OCaml datatypes. When everything is a
> library, using a network for internal communication would be crazy.

At any degree of scale, though, you're going to be running many
unikernels across many hosts - so some degree of network communication
is going to be required (even if you minimise it with locality). Also,
in most environments some of that integration is going to be with
non-Mirage/OCaml-based systems and/or not running on the same host.
> Also, while Linux allows any process to listen on the network, Mirage
> uses dependency injection so that only components that need network
> access will be given it.

Yup, which is great. My thoughts were mainly about the second issue...
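
For anyone not familiar with the functor approach, a toy sketch of the
idea (module and function names here are mine for illustration, not
the real MirageOS signatures):

```ocaml
(* Sketch: a component only gets network access if a network
   capability is passed in as a functor argument. Illustrative
   only - not the actual mirage-types interfaces. *)
module type NETWORK = sig
  type t
  val connect : t -> string -> int -> unit  (* host, port *)
end

(* This component can talk to the network, because it was
   explicitly given a NETWORK implementation... *)
module Web_client (N : NETWORK) = struct
  let fetch net host = N.connect net host 80
end

(* ...while this one, built without any NETWORK argument,
   simply has no way to open a connection. *)
module Pure_parser = struct
  let parse (s : string) = String.length s
end
```

The point being that "can this code touch the network?" is answered at
link time by what you wired in, not by a firewall rule after the fact.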

> For the second, whatever is composing the services should configure
> the network, in my opinion. In other words, if I say I want my web
> server VM connected to a database VM, then nothing else should have
> access to the DB VM.
> I would certainly like to see a higher-level API for networking, that
> doesn't allow unexpected connections. e.g. we currently offer services
> a low-level network API like:
>   val connect : network -> ipaddr -> port -> flow
>   val listen : network -> port -> callback -> unit
> With this API, a library with network access can connect anywhere in
> the world by supplying any IP address and port number, and must handle
> its own encryption. A higher-level capability-style API could offer
> something more abstract, e.g.
>   module type SturdyRef = sig
>     type t
>     val connect : t -> flow
>   end
> Here, our web server would simply get a SturdyRef.t for the database,
> and all it could do would be to connect to it.

Agreed. I just want something like this to exist :)
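
For concreteness, a toy expansion of that idea might look like the
following - everything beyond Thomas's two-line sketch (the flow
record, Db_ref, make, serve) is my own illustrative naming:

```ocaml
(* Toy capability-style network API, building on the SturdyRef
   sketch above. Illustrative only - not a real MirageOS API. *)
type flow = { endpoint : string }

module type STURDY_REF = sig
  type t
  val connect : t -> flow
end

(* A database capability: the holder can connect to this one
   endpoint and nothing else. Only the composer, which wires the
   system together, gets to call [make]. *)
module Db_ref : sig
  include STURDY_REF
  val make : string -> t
end = struct
  type t = string
  let make endpoint = endpoint
  let connect t = { endpoint = t }
end

(* The web server is handed only the capability it needs; it
   cannot conjure up a connection to an arbitrary address. *)
let serve (db : Db_ref.t) =
  let f = Db_ref.connect db in
  f.endpoint
```

Because [t] is abstract, code holding a Db_ref.t can connect to the
database and do nothing else - the "firewall" is the type system.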

I also think unikernels could make for really nice network devices
(firewalls, security controls, proxies, etc.)

In my experience, lots of people find the network the limiting factor
when they start down the microservices rabbit hole. How unikernels
would work with some of the newer players in this space, like Weave
[http://weave.works/] or Calico [http://www.projectcalico.org/],
might be interesting to consider.

> What does debugging a system based on unikernels look like?
> There's an example here: https://mirage.io/wiki/profiling
> "As a motivating example, we'll track down a (real, but now fixed) bug
> in MirageOS's TCP stack."

From an operator's point of view that's not really the same thing. The
issues I see:

* enabling it requires recompilation and redeployment (although you
could probably put this behind some sort of feature flag?)
* it's not interactive

I think the first is interesting, as the unikernel you're running
might be provided by a third-party vendor and you might not have the
source code or the right to modify/recompile it. Or changes might
require a lengthy change approval process.
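
As a thought experiment, a feature flag here might just mean gating
tracing behind a boot-time parameter rather than a recompile. A
minimal sketch, where [boot_args] stands in for however the unikernel
receives its boot parameters (in MirageOS this would be a
configuration key; the mechanism below is purely illustrative):

```ocaml
(* Sketch: enable tracing at boot time instead of compile time.
   [boot_args] is a stand-in for the unikernel's boot parameters. *)
let tracing_enabled boot_args =
  List.mem "trace=on" boot_args

(* Emit a trace line only when the flag was set at boot. *)
let maybe_trace boot_args msg =
  if tracing_enabled boot_args then
    print_endline ("[trace] " ^ msg)
```

That still doesn't give you interactivity, but it would at least let
an operator turn diagnostics on without a rebuild.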

The second might be a matter of debugging at the hypervisor/Xen layer,
but I've limited experience there. That also raises isolation issues -
I probably want to limit access to the hypervisor more than to an
individual application instance.

I'm obviously mainly in critique mode with the post and points above.
My main interest is in getting anyone thinking about operational
problems early; in my view it's a pretty interesting set of issues for
which good solutions undoubtedly exist.



> --
> Dr Thomas Leonard        http://roscidus.com/blog/
> GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

Gareth Rushgrove


MirageOS-devel mailing list
