
Re: [Caml-list] Master-slave architecture behind an ocsigen server.



On 28 Mar 2013, at 08:47, Alain Frisch <alain@xxxxxxxxx> wrote:

> On 03/28/2013 08:37 AM, Philippe Veber wrote:
>> Hi Martin,
>> nproc meets exactly my needs: a simple lwt-friendly interface to
>> dispatch function calls on a pool of processes that run on the same
>> machine. I have only one concern, which should probably be discussed on
>> the ocsigen list: is it okay to fork the process running the ocsigen
>> server? I seem to remember warnings about parent and child processes
>> sharing connections/channels, but it's really not clear to me.
> 
> FWIW, LexiFi uses an architecture quite close to this for our application.  
> The main process manages the GUI and dispatches computation tasks to 
> external processes.  Some points to be noted:
> 
> - Since this is a Windows application, we cannot rely on fork.  Instead, we 
> restart the application (Sys.argv.(0)) with a specific command-line flag, 
> captured by the library in charge of managing computations.  This is done by 
> calling a special function in this library; the function does nothing in the 
> main process, while in the sub-processes it starts the special mode and never 
> returns.  This gives the main application a chance to do some global 
> initialization common to the main and sub-processes (for instance, we dynlink 
> external plugins in this initialization phase).
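
If I understand the flag trick correctly, the entry point looks roughly like
the sketch below (all names invented, not the actual LexiFi code; needs the
unix library):

  let worker_flag = "--computation-worker"

  (* Call this early, after the initialization that is common to the main
     process and the workers (e.g. dynlinking plugins).  In the main process
     it simply returns; in a sub-process it runs [worker_loop] and never
     returns. *)
  let maybe_enter_worker_mode ~worker_loop =
    if List.mem worker_flag (Array.to_list Sys.argv) then begin
      worker_loop ();
      exit 0
    end

  (* The main process creates a worker by re-running its own binary. *)
  let spawn_worker () =
    Unix.create_process Sys.argv.(0) [| Sys.argv.(0); worker_flag |]
      Unix.stdin Unix.stdout Unix.stderr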
> 
> - Computation functions are registered as global values.  Registration 
> returns an opaque handle which can be used to call such a function.  We don't 
> rely on marshaling closures.
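
A minimal shape for such a registry, as I imagine it (invented names, only an
illustration of the idea):

  (* Registration must happen identically in the main process and in every
     worker, so only a stable name plus marshaled arguments/results ever
     cross the process boundary -- no marshaled closures. *)
  module type COMPUTATION_REGISTRY = sig
    type ('a, 'b) handle   (* opaque: identifies a registered function *)
    val register : name:string -> ('a -> 'b) -> ('a, 'b) handle
    val call : ('a, 'b) handle -> 'a -> 'b   (* run in some worker *)
  end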
> 
> - The GUI process actually spawns a single sub-process (the Scheduler), which 
> itself manages more worker sub-sub-processes (up to a maximum number of 
> workers).  Currently, we don't do very clever scheduling based on task 
> priorities, but this could easily be added.
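
Presumably the Scheduler's bookkeeping is something along these lines (a
sketch under my own assumptions, not the real implementation):

  type worker = { pid : int; mutable busy : bool }

  type 'task scheduler = {
    mutable workers : worker list;   (* pool meant to stay <= max_workers *)
    pending : 'task Queue.t;         (* tasks not yet assigned *)
    max_workers : int;
  }

  (* Hand the next pending task to an idle worker, if any. *)
  let dispatch s ~send_to_worker =
    match List.filter (fun w -> not w.busy) s.workers with
    | w :: _ when not (Queue.is_empty s.pending) ->
        let t = Queue.pop s.pending in
        w.busy <- true;
        send_to_worker w t
    | _ -> ()   (* no idle worker or nothing pending: wait *)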
> 
> - An external computation can spawn sub-computations (by applying a parallel 
> "map" to a list) either synchronously (direct style) or asynchronously (by 
> providing a continuation function, which will be applied to the list of 
> results, maybe in a different process).  In both cases, this is done by 
> sending those tasks to the Scheduler.  The Scheduler dispatches computation 
> tasks to available workers.  In the synchronous parallel map, the caller runs 
> an inner event loop to communicate with the Scheduler (and it only accepts 
> sub-tasks created by itself or one of its descendants).
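
So the two flavours would have signatures roughly like this (invented names;
as you say, the continuation may run in a different process):

  module type PARALLEL = sig
    type ('a, 'b) handle   (* a registered computation, as above *)

    (* Synchronous: blocks in an inner event loop until all results are
       back, accepting only sub-tasks created by this computation or its
       descendants. *)
    val parallel_map : ('a, 'b) handle -> 'a list -> 'b list

    (* Asynchronous: returns immediately; [continuation] is applied to the
       results later, possibly in a different process. *)
    val parallel_map_async :
      ('a, 'b) handle -> 'a list -> continuation:('b list -> unit) -> unit
  end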
> 
> - Top-level external computations can be stopped by the main process (e.g. on 
> user request).  Concretely, this kills all workers currently working on that 
> task or one of its sub-tasks.
> 
> - In addition to sending back the final results, computations can report 
> progress and intermediate results to their caller.  This is useful to 
> show a progress bar/status and partial results in the GUI before the end of 
> the entire computation.
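
I picture the reporting side as something like this sketch (hypothetical
names on both the worker and the GUI side):

  module type PROGRESS = sig
    type task_id
    type partial_result

    (* Called from inside a computation running in a worker. *)
    val report_progress : float -> unit          (* fraction in [0.;1.] *)
    val report_partial : partial_result -> unit

    (* Registered by the caller (here the GUI) when it starts a task;
       invoked as messages arrive, before the final result. *)
    val on_progress : task_id -> (float -> unit) -> unit
    val on_partial : task_id -> (partial_result -> unit) -> unit
  end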
> 
> - Communication between processes is done by exchanging marshaled "variants" 
> (a tagged representation of OCaml values, generated automatically using our 
> runtime types).  Since we can attach special variantizers/devariantizers to 
> specific types, this gives a chance to customize how some values have to be 
> exchanged between processes (e.g. values relying on internal hash-consing are 
> treated specially to recreate the maximal sharing in the sub-process).
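
For readers without LexiFi's runtime types, a first approximation of such a
tagged representation might be (my own simplification, not the actual type):

  type variant =
    | Unit
    | Bool of bool
    | Int of int
    | Float of float
    | String of string
    | List of variant list
    | Record of (string * variant) list
    | Constructor of string * variant option

  (* A type with custom (de)variantizers -- e.g. one that rebuilds
     hash-consing on the receiving side -- would supply its own pair of
     conversions: *)
  type 'a conv = {
    to_variant : 'a -> variant;
    of_variant : variant -> 'a;
  }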
> 
> - Concretely, the communication between processes is done through queues of 
> messages implemented with shared memory.  (This component was developed by 
> Fabrice Le Fessant and OCamlPro.)   Large computation arguments or results 
> (above a certain size) are stored on the file system, to avoid having to keep 
> them in RAM for too long (if all workers are busy, the computation might wait 
> for some time before being started).
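
The spill-to-disk part sounds like the following sketch (invented names and a
made-up threshold; the queue itself is the shared-memory component you
mention):

  type payload =
    | Inline of string        (* marshaled value, small enough to enqueue *)
    | On_disk of string       (* path of a temp file holding the value *)

  let spill_threshold = 1 lsl 20   (* 1 MB; a made-up figure *)

  let to_payload (marshaled : string) =
    if String.length marshaled <= spill_threshold then Inline marshaled
    else begin
      let path = Filename.temp_file "task" ".arg" in
      let oc = open_out_bin path in
      output_string oc marshaled;
      close_out oc;
      On_disk path
    end

  let of_payload = function
    | Inline s -> s
    | On_disk path ->
        let ic = open_in_bin path in
        let s = really_input_string ic (in_channel_length ic) in
        close_in ic;
        Sys.remove path;
        s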

Are all of the messages through these queues persistent, or just the larger 
ones that are too big to fit in the shared memory segment, and are they always 
point-to-point streams?

We've got a similar need in Xen/Mirage for shared memory communication and 
queues, and have been breaking them out into standalone libs such as:

https://github.com/djs55/shared-memory-ring

...which is ABI-compatible with the existing Xen shared memory interfaces, and 
also provides an OCaml version of the transport-agnostic API sketched out in:
http://anil.recoil.org/papers/2012-resolve-fable.pdf
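
For concreteness, the rough shape of the transport-agnostic interface I mean
is below -- an illustration only, not the actual shared-memory-ring or Fable
API:

  module type TRANSPORT = sig
    type t                              (* one endpoint of a duplex channel *)
    val write : t -> string -> unit     (* enqueue one message *)
    val read  : t -> string             (* block until a message arrives *)
    val close : t -> unit
  end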

The missing link currently is the persistent queuing service, but we're 
investigating the options here (ocamlmq looks rather nice).

-anil