
Re: [MirageOS-devel] OPW intern checking in!




On 06/08/2014 01:13 PM, Anil Madhavapeddy wrote:
> On 2 Jun 2014, at 17:00, Mindy <mindy@xxxxxxxxxxxxxxxxxxx> wrote:
>
>> Hi, folks!  Here's a quick summary of what I've been up to.
>>
>> Late last week: dug more into the observed lack of FINs from listening
>> unikernels, found the problem, and submitted a pull request to mirage-tcpip.
>> This had Balraj and me scratching our collective heads about how it
>> regressed, since we're *sure* it used to work.

It works just fine when Mirage is the client initiating the TCP connection (there's a different codepath for listens vs. connections), so that might be it?

> (famous last words :-). Balraj, any thoughts about it after your Friday
> investigations?
>
> This does highlight the importance of getting a regression test
> infrastructure for networking in place though, since TCP/IP in particular
> is a very interlocked protocol.  Any thoughts you may have about this using
> scapy would be interesting...

My first thought is that I don't know how to set up the workflow I've been using (start a unikernel under Xen and throw a bunch of traffic at it from scapy) in Travis, but if that doesn't seem like obviously the wrong approach, I can look into it. I would still like to look at QuickCheck as well.
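For a scapy-driven regression test, the core assertion would be that the last segment a listening unikernel sends on close actually carries the FIN flag. As a rough, hypothetical sketch of that check at the byte level (the offsets follow the standard TCP header layout from RFC 793 — none of this is from mirage-tcpip or the actual test harness):

```python
import struct

# TCP flag bits, per the TCP header layout (RFC 793).
FIN, SYN, ACK = 0x01, 0x02, 0x10

def tcp_header(src_port, dst_port, seq, ack, flags):
    """Pack a minimal 20-byte TCP header (no options, zero checksum)."""
    offset_reserved = 5 << 4  # data offset = 5 words, no reserved bits
    return struct.pack("!HHIIBBHHH",
                       src_port, dst_port, seq, ack,
                       offset_reserved, flags,
                       65535,  # window
                       0,      # checksum (left zero in this sketch)
                       0)      # urgent pointer

def has_fin(segment):
    """Return True if the segment's TCP FIN bit is set."""
    return bool(segment[13] & FIN)  # flags byte is at offset 13

# A well-behaved listener sends FIN|ACK when it closes the connection...
closing = tcp_header(80, 54321, seq=1000, ack=2000, flags=FIN | ACK)
assert has_fin(closing)

# ...and a plain ACK must not be mistaken for a close.
keepalive = tcp_header(80, 54321, seq=1000, ack=2000, flags=ACK)
assert not has_fin(keepalive)
```

In practice scapy does this packing and flag inspection for you; the point of the sketch is just the shape of the assertion a regression suite would make against the segments coming back from the unikernel.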
> Meanwhile, I've released it as tcpip.1.1.3 into OPAM:
> https://github.com/ocaml/opam-repository/pull/2207
\o/

Also, an update: last week I wrote up another Treaty of Westphalia on finding a TCP bug, and made Mirage implementations of chargen, discard, and echo for use in testing the TCP stack more directly. Today I'm planning to get a whole bunch of data from them and (I hope) find some interesting results.
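For anyone following along: these are the classic inetd test services (echo is RFC 862, discard is RFC 863, chargen is RFC 864), which are handy for exercising a TCP stack because their per-connection behaviour is trivial to verify. As a rough stand-in for what they do — this is a hypothetical Python sketch over loopback, not the OCaml/Mirage implementations:

```python
import socket
import threading

def echo(conn):
    """RFC 862: send back every byte received, until the peer closes."""
    while data := conn.recv(4096):
        conn.sendall(data)

def discard(conn):
    """RFC 863: read and drop everything; never send anything back."""
    while conn.recv(4096):
        pass

def chargen(conn, rounds=1):
    """RFC 864 (simplified): stream lines of printable ASCII at the peer."""
    line = bytes(range(33, 105)) + b"\r\n"  # 72 printable chars + CRLF
    for _ in range(rounds):
        conn.sendall(line)

def serve_once(handler, **kwargs):
    """Listen on an ephemeral loopback port and handle one connection."""
    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]
    def run():
        conn, _ = srv.accept()
        with conn:
            handler(conn, **kwargs)
        srv.close()
    threading.Thread(target=run, daemon=True).start()
    return port

# Exercise the echo service end to end over loopback.
port = serve_once(echo)
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"ping")
    assert c.recv(4096) == b"ping"
```

The appeal for stack testing is that each service isolates one direction of data flow: discard only receives, chargen only sends, and echo does both, so misbehaviour (like a missing FIN on close) shows up against a very simple expected transcript.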

Thanks,
Mindy

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel


 

