Re: [MirageOS-devel] Error handling in Mirage
On 10 Jul 2014, at 14:13, David Scott <scott.dj@xxxxxxxxx> wrote:

> This sounds sensible to me.
>
> When debugging one of these fatal errors (e.g. 'Disconnected' which means:
> someone called 'disconnect' and then tried 'read' or 'write' afterwards) it
> would be useful to have more context. For example it would probably be good
> to add the device id to the exception so we know which disk it relates to.
> This could help us infer that the buggy code is in the FS driver running on
> disk 2 and not the irmin backend on disk 1. There might be other useful
> context which is more implementation-specific. We could either just log this
> to the console or we could encode it up as an Sexp.t and attach that to the
> exception too?

We need to be a little careful with adding the device `id` into the
exception, as it's abstract in the module type and can vary by device.
But I guess this is moot with exceptions -- if you're specifically
catching and deconstructing it, you know the type of id since you
caught it outside of a _. (Rough sketch at the end of this mail.)

> Since our top-level function is in the auto generated code:
>
> let () =
>   OS.Main.run (join [t1 ()])
>
> For every exception we declare in our interface, should we pre-create a
> default exception handler which pretty-prints it? This could also explain to
> the unlucky user that this is a bug and should be reported on the issue
> tracker?

It would be nice if every `job` that was registered would act as a
monitor for its top-level exceptions. An exception leaking up to one
job shouldn't kill another one, even if they're running in the same
unikernel. (Second sketch at the end.)

> BTW I've now forked the V1 interfaces into a V2, so feel free to propose
> concrete changes as pull requests!

Excellent!

-anil
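
P.S. To make the `id` point concrete, here is a rough, hypothetical sketch
(not the actual V1/V2 signatures): a block device whose 'Disconnected'
exception carries its id, so a handler that matches the constructor also
knows the id's concrete type and can say which disk it was.

  module Block = struct
    type id = string                 (* concrete here; abstract in the module type *)
    exception Disconnected of id     (* hypothetical: carry the id for context *)

    type t = { id : id; mutable connected : bool }

    let connect id = { id; connected = true }
    let disconnect t = t.connected <- false

    let read t _sector =
      if not t.connected then raise (Disconnected t.id)
      (* ... the actual read would go here ... *)
  end

  let () =
    let disk2 = Block.connect "disk2" in
    Block.disconnect disk2;
    try Block.read disk2 0 with
    | Block.Disconnected id ->
        (* we matched the concrete constructor, so [id] is just a string *)
        Printf.eprintf "bug: read after disconnect on %s\n" id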
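
P.P.S. And a minimal sketch of the per-job monitor idea, using plain Lwt
(Lwt_main.run stands in for OS.Main.run here, and `monitor`, `t1`, `t2`
are made-up names, not what the mirage tool generates):

  (* two toy jobs: one finishes normally, one blows up *)
  let t1 () = Lwt.return_unit
  let t2 () = Lwt.fail (Failure "Disconnected")

  (* each job gets its own handler, so one job's exception can't take
     down the others running in the same unikernel *)
  let monitor name job =
    Lwt.catch job
      (fun exn ->
         Printf.eprintf
           "job %s died with: %s -- this is a bug, please report it\n%!"
           name (Printexc.to_string exn);
         Lwt.return_unit)

  let () =
    Lwt_main.run (Lwt.join [ monitor "t1" t1; monitor "t2" t2 ])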