
Re: [MirageOS-devel] Parallelizing writing to network devices



> On 28 Nov 2014, at 16:03, Masoud Koleini <masoud.koleini@xxxxxxxxxxxxxxxx> 
> wrote:
> 
> Thanks Anil.
> 
>> - graph the ring utilisation to see if it's always full (Thomas Leonard's 
>> profiling patches should help here)
> 
> Would you please point me to the profiling patches?

See: http://roscidus.com/blog/blog/2014/10/27/visualising-an-asynchronous-monad/
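
For graphing the ring/transmit-queue occupancy, the counters described in that
post can be wrapped around the write path, roughly like this (untested sketch,
and I'm assuming the MProf.Counter.make/increase API from the post rather than
quoting it exactly; the counter name and wrapper are made up):

open Lwt.Infix

(* Assumes mirage-profile exposes a counter API along the lines of
   MProf.Counter.make ~name / MProf.Counter.increase, as shown in that
   blog post -- check the library for the exact signatures. *)
let tx_inflight = MProf.Counter.make ~name:"tx-inflight"

(* Wrap whatever write function the unikernel uses so the trace records
   how many transmits are outstanding at any moment. *)
let traced_write write frame =
  MProf.Counter.increase tx_inflight 1;
  write frame >|= fun () ->
  MProf.Counter.increase tx_inflight (-1)

Plotted in the trace viewer, that should make it clear whether the ring stays
full while the VM is wedged.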

>> - try to reduce the parallelisation to see if some condition there 
>> alleviates the issue to track it down.
> 
> Reducing the maximum number of threads running in parallel reduced CPU 
> utilization, and the VM kept running for much longer, but the same problem 
> eventually occurred.
> 
> It might be more useful to look at the code. Please have a look at the 
> function "f_thread" in the file uploaded to the following repo:
> 
> https://github.com/koleini/parallelisation

That's a lot of code to distill a test case from.  Try to cut it down 
significantly by building a minimal Ethernet traffic generator that outputs 
frames containing a predictable pattern, and a receiver that checks that the 
pattern arrives as expected.
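
Something along these lines would do as a starting point (untested, and the
NETWORK signature below is only a stand-in for whatever device interface you
are driving, e.g. the write/listen pair of V1_LWT.NETWORK; all names are
illustrative):

open Lwt.Infix

(* Stand-in for the device interface under test; not the exact Mirage
   signature. *)
module type NETWORK = sig
  type t
  val write  : t -> Cstruct.t -> unit Lwt.t
  val listen : t -> (Cstruct.t -> unit Lwt.t) -> unit Lwt.t
end

module Pattern_test (N : NETWORK) = struct
  let frame_len = 1024

  (* A frame carries a 32-bit sequence number followed by bytes derived
     from it, so corruption, truncation or reordering shows up immediately.
     A real generator would prepend Ethernet dst/src/ethertype headers. *)
  let make_frame seq =
    let buf = Cstruct.create frame_len in
    Cstruct.BE.set_uint32 buf 0 (Int32.of_int seq);
    for i = 4 to frame_len - 1 do
      Cstruct.set_uint8 buf i ((seq + i) land 0xff)
    done;
    buf

  (* Does a received frame match the pattern for its sequence number?
     (Cstruct.length is Cstruct.len on older cstruct releases.) *)
  let check_frame buf =
    let seq = Int32.to_int (Cstruct.BE.get_uint32 buf 0) in
    let ok = ref true in
    for i = 4 to Cstruct.length buf - 1 do
      if Cstruct.get_uint8 buf i <> (seq + i) land 0xff then ok := false
    done;
    if not !ok then Printf.printf "corrupt frame: seq=%d\n%!" seq;
    !ok

  (* Sender: transmit [n] patterned frames back to back. *)
  let send nf n =
    let rec loop seq =
      if seq >= n then Lwt.return_unit
      else N.write nf (make_frame seq) >>= fun () -> loop (seq + 1)
    in
    loop 0

  (* Receiver: validate every incoming frame. *)
  let receive nf =
    N.listen nf (fun buf -> ignore (check_frame buf); Lwt.return_unit)
end

The two halves can then run as a pair of tiny unikernels (or one unikernel
with two interfaces) and be hammered at line rate.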

Then you can try out your parallel algorithm variations on the simple Ethernet 
sender/receiver and narrow down the problem without all the other concerns.
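
For the parallel variations themselves, even something as small as this is
enough to compare one transmitting thread against many (again untested;
write_frame seq stands in for N.write nf (make_frame seq) from the sketch
above):

open Lwt.Infix

let parallel_send ~write_frame ~threads ~frames_per_thread =
  let worker t =
    (* give each thread its own sequence range so the receiver can tell
       which thread produced a corrupt frame *)
    let base = t * frames_per_thread in
    let rec loop i =
      if i >= frames_per_thread then Lwt.return_unit
      else write_frame (base + i) >>= fun () -> loop (i + 1)
    in
    loop 0
  in
  (* start all workers and wait for every one of them to finish *)
  Lwt.join (List.init threads worker)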

Once the bug is tracked down, we can add the sender/receiver into 
mirage-skeleton and use it as a test case to ensure that this functionality 
never regresses in the future.  Line-rate Ethernet transmission has worked in 
the past, but we never added a test case to ensure it stays working.

Anil
_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel


 

