
Re: [MirageOS-devel] Unix.Unix_error(Unix.EBADF, "check_descriptor", "") (was: Irmin_http with Cohttp_mirage)



In the end it was an easy fix:



Sven Anderson <sven@xxxxxxxxxxx> wrote on Tue., Dec. 5, 2017 at 22:28:
Hi all,

I finally managed to make a pretty small reproducer for the uncatchable exception Unix.Unix_error(Unix.EBADF, "check_descriptor", ""). I put the sources in a gist [1]. The trick to trigger the bug is to have a persistent HTTP connection to the server and reuse it for many HTTP requests. It also seems important that the request triggers another client request to an external HTTP server. (My reproducer is a very primitive static proxy, so to say.) It also seems to depend on the amount of data that is forwarded through the unikernel. And of course it only happens with --net=socket.

After compiling and running the unikernel, it must be tortured with the following command:
$ printf 'url="http://localhost:8080"\n%.0s' {1..1000} | curl -v -K -
and it should fail pretty quickly.

It's a pretty racy bug: sometimes the exception is caught by my own exception handler, sometimes it's caught by Cohttp, and sometimes it is caught by the Lwt scheduler, which then terminates the whole process.

So my current guess is that there are two problems: a memory leak when reusing HTTP connections, which in turn triggers an exception that is not handled properly.

So, first question would be: where to file the bug? Is it a Cohttp bug? Or Lwt? Or both?

I'm happy to investigate this further, if someone can explain how to get a useful backtrace for the exception.
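(Not part of the original thread, but a minimal sketch of one way to capture a backtrace: the OCaml runtime only records backtraces when asked, either via the environment variable OCAMLRUNPARAM=b or by calling Printexc.record_backtrace. For exceptions that escape into the Lwt scheduler, one would additionally install a handler in Lwt.async_exception_hook; the snippet below sticks to the stdlib and simulates an escaping exception.)

```ocaml
(* Enable backtrace recording before any exception is raised;
   without this, Printexc.get_backtrace returns an empty string. *)
let () = Printexc.record_backtrace true

(* Hypothetical stand-in for the failing Lwt_unix operation. *)
let failing_operation () =
  raise (Failure "simulated check_descriptor failure")

let () =
  try failing_operation () with
  | exn ->
      (* Print the exception and the recorded backtrace.
         get_backtrace must be called before any other exception
         is raised, or the trace is overwritten. *)
      Printf.printf "Caught: %s\n" (Printexc.to_string exn);
      print_string (Printexc.get_backtrace ())
```

The same Printexc calls can be placed inside an Lwt.async_exception_hook handler so that exceptions the scheduler would otherwise swallow are logged with their trace instead of killing the process.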


Cheers,

Sven

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/mirageos-devel
