
Re: requests for clarification



On Tue, Dec 20, 2011 at 12:14:51PM +0000, Christopher Greenhalgh wrote:
> I thought I'd use some of the vacation to try to get up to speed a bit
> with mirage, and have a few questions and a couple of tutorial comments
> so far...

Splendid! It's worth an initial note on where we are, since there's been
quite a bit of background hacking recently towards various papers, and not
all of it is merged yet.

We've patched together a fairly complete protocol stack, but found a few
deficiencies in the process:

- Bitstrings are very efficient to read in a zero-copy fashion, but writing
  them constructs many small strings that are copied several times. This
  draft paper on Reconfigurable I/O [1] has a C interface that I'm adapting
  at the moment for use in Mirage (there's a short read/write sketch after
  this list). Balraj and Haris have also done extensive work on
  TCP/OpenFlow, so we have a reasonably stable base now at least!

- Profiling and characterising *why* something is slow under Xen is
  rather difficult currently, but should be easy to fix by implementing the
  gprof stubs in our microkernel, and adjusting the build system. 

- Several of the libraries should be usable independently of Mirage, most
  notably the xenstore implementation. Dave has pulled the code out
  (github/djs55/ocaml-xenstore), and I'm going to fix the Mirage build to
  use a git remote subrepo. Ideally, we should have a skeleton library
  repository that can compile for normal UNIX Lwt as well as Mirage. This
  is only practical for libraries which don't interact much with the
  system, but it's better than the current copy-and-paste situation.

- The most exciting thing left to integrate is FRP-style I/O [2]. This
  makes all kernel structures bidirectional: for example, rather than just
  having an ARP cache, anything that queries the cache will be 'tied' to
  it, and subsequent updates (e.g. an ARP packet or a timeout) will result
  in a recomputation of the upstream flow (there's a tiny sketch of the
  idea after this list). This is the essence of cloud programming, where
  there are lots of environmental changes (e.g. live relocation), and a
  Mirage kernel which handles them explicitly is an interesting experiment.
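
To make the bitstring point concrete, here's a minimal sketch of both
paths. This assumes the pa_bitstring camlp4 extension from ocaml-bitstring
is loaded; the Ethernet-ish layout and the function names are purely
illustrative, not code from the tree:

    (* Zero-copy read: the ': bitstring' fields are offset/length views
       into [frame], so no payload bytes are copied while parsing. *)
    let ethertype_of frame =
      bitmatch frame with
      | { _dst : 48 : bitstring;
          _src : 48 : bitstring;
          ethertype : 16 } -> Some ethertype
      | { _ } -> None

    (* Write path: each field is serialised into a small buffer and the
       pieces are then concatenated, so the bytes end up being copied
       several times; this is the cost the Reconfigurable I/O work aims
       to remove. *)
    let arp_frame ~dst ~src =
      BITSTRING {
        dst    : 48 : bitstring;
        src    : 48 : bitstring;
        0x0806 : 16  (* ARP ethertype *)
      }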

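And on the FRP point, here's a tiny self-contained sketch of the idea in
vanilla OCaml. This is *not* the froc API from [2], just the shape of the
dependency tracking: readers are 'tied' to a cell, and every update re-runs
whatever depends on it.

    (* A reactive cell: a value plus the computations tied to it. *)
    type 'a cell = { mutable value : 'a; mutable readers : ('a -> unit) list }

    let cell v = { value = v; readers = [] }

    (* Tie a computation to the cell: run it now, and on every update. *)
    let tie c f = c.readers <- f :: c.readers; f c.value

    (* An update (e.g. an ARP reply or a timeout) recomputes downstream. *)
    let update c v = c.value <- v; List.iter (fun f -> f v) c.readers

    (* e.g. an ARP cache entry for 10.0.0.1 *)
    let () =
      let arp_entry = cell None in
      tie arp_entry (function
        | None     -> print_endline "10.0.0.1 unresolved: queue the packet"
        | Some mac -> Printf.printf "10.0.0.1 -> %s: transmit\n" mac);
      update arp_entry (Some "fe:ff:ff:ff:ff:ff")  (* ARP reply arrives *)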
So what this means is that the repository is going to undergo some
short-term volatility through January :) I'll likely unhook most of the
high-level protocol libraries from the build, make the benchmark library
work with OpenFlow/Ethernet first, and then work up to TCP. Once we are
happy that we can profile and stress-test those layers, I'll bring the
DNS/HTTP/etc. libraries back into the new world.

However, all this work will happen in a branch, so you can continue to
mess around in the master branch to get a feel for the system.

[1] http://anil.recoil.org/papers/drafts/2012-resolve-draft1.pdf
[2] http://ambassadortothecomputers.blogspot.com/2010/05/how-froc-works.html

> What is the status of/plans for the orm stuff?
> (https://github.com/mirage/orm - seems to be outside the current
> 'release' and tutorial scope) (plan A for learning mirage was to try
> making a web application or two)

The ORM was a bit of an early experiment. It certainly works, but the
semantics of deletion are rather tricky. It's going to be a few months
before it can be hooked back in. However, the easiest way to build a small
web app for now is to compile the data directly into the binary.

> Can I check my understanding on some of the networking stuff...
> 
> -          the unix-socket version uses sockets directly and doesn't try
> to support the Ethif interface, whereas the unix-direct version does;
> this is the common low level interface (on both Xen and Unix) on which
> the ocaml ip stack is implemented (common to both targets)?

Yup.

> 
> -          The Flow and esp. Channel and Manager interfaces are common
> abstractions across all platforms? I have to say I am pretty unclear
> exactly what the role and function of the Manager is; I guess there is
> one per application, even if multiple interfaces, and there is some
> reference to "swap to shared memory" but this seems to be a promissory
> note)

The Manager was intended to be the system-wide service (so yes, it could
select libvchan shared memory instead of TCP). However, that whole
interface is going to change into a higher-level 'open a URI flow' model
instead. You would just open 'tcp://foo.bar:500' or 'xio://proc3' (for
Mirage-internal calls). For now, just use the Manager from the examples,
but don't get too attached to it :)
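
To give a feel for it, a very rough sketch of the sort of signature I mean
(all of the names below are hypothetical; none of them exist in the tree
yet):

    (* Hypothetical 'open a URI flow' interface *)
    module type FLOW = sig
      type t

      (* e.g. connect "tcp://foo.bar:500" or connect "xio://proc3" *)
      val connect : string -> t Lwt.t

      val read  : t -> string option Lwt.t  (* None on end-of-flow *)
      val write : t -> string -> unit Lwt.t
      val close : t -> unit Lwt.t
    end

The point is that the scheme in the URI, rather than the caller, would pick
the underlying transport (sockets, direct TCP, shared memory, ...).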

> Can someone comment on the wisdom of using the unix socket vs direct
> version? (thinking of performance as well as stability)

Sockets are there to make developing the higher-level protocols easier,
without worrying about TCP bugs.

> Is anyone working on the Node (or browser) version of the networking?
> (which appears to just be an unimplemented shell)

Not really at the moment. It's a cool hack that keeps the architecture
'honest' (i.e. no C bindings), but doesn't really have a compelling
purpose beyond that. Raphael did the most recent work, but it probably
needs another week or so to fix up the I/O bindings to node.js (which uses
a Buffer abstraction instead of JavaScript strings).

> 
> Some of the mirage build was broken (for me on Centos 6, anyway) by the
> recent change to some of the .sh file headers (e.g. assemble.sh) from
> #!/bin/bash to #!/usr/bin/env bash -e: /usr/bin/env: bash -e: No such
> file or directory
> 
> When I run any of the net examples as unix-direct (again on Centos 6),
> (having created tap0 explicitly using tunctl, which wasn't mentioned in
> the tutorial), it appears (from what it prints and from subsequent
> ifconfig output) to execute each time: /sbin/ifconfig tap0 10.0.0.2
> netmask 255.255.255.0 up But the mirage process is itself using ip
> 10.0.0.2, so I have to change the host interface ip back to 10.0.0.1
> (e.g.) before I can communicate with the mirage process on ip 10.0.0.2

Ah, these might be due to Haris' hacks to get OpenFlow to work. I'm paging
back into Mirage hacking now, so I'll tidy this stuff up too.

-a



 

