
Re: upcoming MirageOS meeting 2022-09-07



Hi,

Just to let you know that someone (@rand00) has started writing a unikernel for testing a connection
and measuring its bandwidth. It's useful for testing multiple backends, and it should give us more information
about where the bottleneck is.
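
For what it's worth, the core of such a tool is small. Here is a rough sketch of my own (not @rand00's actual code) of what the sending side could look like on top of mirage-tcpip, assuming a Tcpip.Stack.V4V6 functor argument, a placeholder peer address, and a fixed 10-second run:

  open Lwt.Infix

  module Main (Clock : Mirage_clock.MCLOCK) (S : Tcpip.Stack.V4V6) = struct
    let payload = Cstruct.create 4096        (* buffer pushed repeatedly *)
    let duration_ns = 10_000_000_000L        (* run for ~10 s *)

    let start _clock stack =
      (* placeholder peer: replace with the real test server *)
      let dst = (Ipaddr.of_string_exn "192.0.2.1", 5201) in
      S.TCP.create_connection (S.tcp stack) dst >>= function
      | Error e ->
        Logs.err (fun m -> m "connect failed: %a" S.TCP.pp_error e);
        Lwt.return_unit
      | Ok flow ->
        let t0 = Clock.elapsed_ns () in
        (* keep writing until the time budget is spent, counting bytes *)
        let rec loop bytes =
          if Int64.sub (Clock.elapsed_ns ()) t0 > duration_ns then Lwt.return bytes
          else
            S.TCP.write flow payload >>= function
            | Ok () -> loop (bytes + Cstruct.length payload)
            | Error _ -> Lwt.return bytes
        in
        loop 0 >>= fun bytes ->
        let secs = Int64.to_float (Int64.sub (Clock.elapsed_ns ()) t0) /. 1e9 in
        Logs.info (fun m ->
            m "sent %.1f Mbit/s" (float_of_int bytes *. 8. /. 1e6 /. secs));
        S.TCP.close flow
  end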


It's still experimental, but it has at least helped me find some limitations of my internet connection. It may
be worthwhile to concentrate our efforts on it in order to finally have a tool to replace our good old

On Wed, Sep 7, 2022 at 5:18 PM Christiano F. Haesbaert <haesbaert@xxxxxxxxxxxxx> wrote:
That looks painfully slow. It's a pity that iperf doesn't report packets per second, but that works out to around ~5 kpps at 1460 B/frame.
I'm also surprised your sender is not saturating gigabit, but I'd have to check how iperf determines that it was able to send the packets out; usually no one cares about the sender side as long as you saturate the link.
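(Rough arithmetic, going by the ~48 Mbit/s receiver rate in the results below: 48 Mbit/s ÷ (1460 B × 8 bit/B) ≈ 4,100 packets/s, i.e. on the order of 5 kpps.)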

I think I owe an explanation, since I frequently claim that solo5 IO is slow but never explain why or how. First of all, I don't mean this as bashing: I love solo5, and its code was simply never intended to be optimized for network performance. Also, take this with a grain of salt; there can be multiple things involved, and the culprit might be a bug somewhere completely unrelated. The truth is I haven't run enough tests to understand how much of this can be blamed on solo5's IO, and I keep repeating that caveat because there might be nothing wrong with it at all.

I had a quick look at the Xen code for the first time just now, and it's quite different from the rest: it has very little to do with how solo5 does IO, since the ring management and IO code is in OCaml, and I can't really reason about it without a lot of time.
At this point I'd try to turn the firewall into an "expensive cable": just copy packets from input to output and get some idea of the baseline.
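
Something along these lines would do as a starting point. This is only a rough sketch of such a "cable" against the Mirage_net.S interface (the functor and device names are made up, and it deliberately skips all firewall/NAT logic):

  open Lwt.Infix

  (* "Expensive cable": copy every frame arriving on one interface out of
     the other, with no parsing, filtering or NAT, to measure the raw
     per-packet copy/IO cost of the setup. *)
  module Make (N : Mirage_net.S) = struct
    let pipe rx tx =
      N.listen rx ~header_size:14 (fun frame ->
          let len = Cstruct.length frame in
          N.write tx ~size:len (fun buf ->
              Cstruct.blit frame 0 buf 0 len;   (* the only per-packet work *)
              len)
          >|= function
          | Ok () -> ()
          | Error e -> Logs.warn (fun m -> m "tx: %a" N.pp_error e))
      >|= function
      | Ok () -> ()
      | Error e -> Logs.err (fun m -> m "rx: %a" N.pp_error e)

    (* forward in both directions concurrently *)
    let start net_in net_out =
      Lwt.join [ pipe net_in net_out; pipe net_out net_in ]
  end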


On Wed, 7 Sept 2022 at 16:25, <pierre.alain@xxxxxxx> wrote:
Hi all,
Regarding UDP packets/s through qubes-mirage-firewall, I ran a quick test (fortunately with no hang, or perhaps unfortunately, since that means it gives no additional clue). As I'm not currently at home I can't test with my FTTH connection, but I think it's still relevant since I managed to saturate the CPU with mirage.

With MirageOS as the firewall (CPU is at 100%):
$ iperf3 -c lon.speedtest.clouvider.net -p 5203 -u -b 0
Connecting to host lon.speedtest.clouvider.net, port 5203
[  5] local 10.137.0.20 port 56373 connected to 5.180.211.133 port 5203
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec  51.9 MBytes   436 Mbits/sec  38090 
[  5]   1.00-2.00   sec  71.1 MBytes   597 Mbits/sec  52160 
[  5]   2.00-3.00   sec  71.8 MBytes   603 Mbits/sec  52680 
[  5]   3.00-4.00   sec  73.7 MBytes   618 Mbits/sec  54030 
[  5]   4.00-5.00   sec  73.8 MBytes   619 Mbits/sec  54080 
[  5]   5.00-6.00   sec  60.0 MBytes   504 Mbits/sec  44030 
[  5]   6.00-7.00   sec  45.0 MBytes   377 Mbits/sec  32990 
[  5]   7.00-8.00   sec  73.6 MBytes   617 Mbits/sec  53950 
[  5]   8.00-9.00   sec  70.8 MBytes   594 Mbits/sec  51920 
[  5]   9.00-10.00  sec  70.6 MBytes   592 Mbits/sec  51760 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   662 MBytes   556 Mbits/sec  0.000 ms  0/485690 (0%)  sender
[  5]   0.00-10.00  sec  57.2 MBytes  48.0 Mbits/sec  0.269 ms  437001/478947 (91%)  receiver

With Linux as the firewall (CPU is around 80%):
$ iperf3 -c lon.speedtest.clouvider.net -p 5203 -u -b 0
Connecting to host lon.speedtest.clouvider.net, port 5203
[  5] local 10.137.0.21 port 49539 connected to 5.180.211.133 port 5203
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec  82.2 MBytes   689 Mbits/sec  60240 
[  5]   1.00-2.00   sec  85.7 MBytes   719 Mbits/sec  62850 
[  5]   2.00-3.00   sec  85.2 MBytes   715 Mbits/sec  62460 
[  5]   3.00-4.00   sec  79.0 MBytes   663 Mbits/sec  57920 
[  5]   4.00-5.00   sec  84.8 MBytes   712 Mbits/sec  62210 
[  5]   5.00-6.00   sec  80.7 MBytes   676 Mbits/sec  59140 
[  5]   6.00-7.00   sec  80.3 MBytes   674 Mbits/sec  58880 
[  5]   7.00-8.00   sec  80.2 MBytes   673 Mbits/sec  58810 
[  5]   8.00-9.00   sec  80.2 MBytes   673 Mbits/sec  58830 
[  5]   9.00-10.00  sec  77.5 MBytes   650 Mbits/sec  56820 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   816 MBytes   684 Mbits/sec  0.000 ms  0/598160 (0%)  sender
[  5]   0.00-10.00  sec  57.2 MBytes  47.9 Mbits/sec  0.272 ms  549868/591780 (93%)  receiver

As Christiano said, there may be room for optimization where solo5 does the IO: here, once the NAT is done, it's only a matter of copying data from one page to another.
This should also be reproducible with a simple NAT unikernel such as https://github.com/mirage/mirage-nat/tree/main/example . I'll try that later.

Best,
Pierre
--
P.



--
Romain Calascibetta - http://din.osau.re/

 

