
Re: upcoming MirageOS meeting 2022-09-07



Hi all,
Regarding the UDP packets-per-second rate with qubes-mirage-firewall, I did a 
quick test (fortunately, or unfortunately, with no hang, so it doesn't give 
any additional clue). As I'm not currently at home I cannot test on my FTTH 
connection, but I think the result is still relevant since I managed to 
saturate the CPU with MirageOS.

With MirageOS as the firewall (CPU at 100%):
$ iperf3 -c lon.speedtest.clouvider.net -p 5203 -u -b 0
Connecting to host lon.speedtest.clouvider.net, port 5203
[  5] local 10.137.0.20 port 56373 connected to 5.180.211.133 port 5203
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec  51.9 MBytes   436 Mbits/sec  38090 
[  5]   1.00-2.00   sec  71.1 MBytes   597 Mbits/sec  52160 
[  5]   2.00-3.00   sec  71.8 MBytes   603 Mbits/sec  52680 
[  5]   3.00-4.00   sec  73.7 MBytes   618 Mbits/sec  54030 
[  5]   4.00-5.00   sec  73.8 MBytes   619 Mbits/sec  54080 
[  5]   5.00-6.00   sec  60.0 MBytes   504 Mbits/sec  44030 
[  5]   6.00-7.00   sec  45.0 MBytes   377 Mbits/sec  32990 
[  5]   7.00-8.00   sec  73.6 MBytes   617 Mbits/sec  53950 
[  5]   8.00-9.00   sec  70.8 MBytes   594 Mbits/sec  51920 
[  5]   9.00-10.00  sec  70.6 MBytes   592 Mbits/sec  51760 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   662 MBytes   556 Mbits/sec  0.000 ms  0/485690 (0%)  sender
[  5]   0.00-10.00  sec  57.2 MBytes  48.0 Mbits/sec  0.269 ms  437001/478947 (91%)  receiver

With Linux as the firewall (CPU at around 80%):
$ iperf3 -c lon.speedtest.clouvider.net -p 5203 -u -b 0
Connecting to host lon.speedtest.clouvider.net, port 5203
[  5] local 10.137.0.21 port 49539 connected to 5.180.211.133 port 5203
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec  82.2 MBytes   689 Mbits/sec  60240 
[  5]   1.00-2.00   sec  85.7 MBytes   719 Mbits/sec  62850 
[  5]   2.00-3.00   sec  85.2 MBytes   715 Mbits/sec  62460 
[  5]   3.00-4.00   sec  79.0 MBytes   663 Mbits/sec  57920 
[  5]   4.00-5.00   sec  84.8 MBytes   712 Mbits/sec  62210 
[  5]   5.00-6.00   sec  80.7 MBytes   676 Mbits/sec  59140 
[  5]   6.00-7.00   sec  80.3 MBytes   674 Mbits/sec  58880 
[  5]   7.00-8.00   sec  80.2 MBytes   673 Mbits/sec  58810 
[  5]   8.00-9.00   sec  80.2 MBytes   673 Mbits/sec  58830 
[  5]   9.00-10.00  sec  77.5 MBytes   650 Mbits/sec  56820 
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   816 MBytes   684 Mbits/sec  0.000 ms  0/598160 (0%)  sender
[  5]   0.00-10.00  sec  57.2 MBytes  47.9 Mbits/sec  0.272 ms  549868/591780 (93%)  receiver
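
In packets-per-second terms, the client pushed roughly 48.6 kpps through the 
MirageOS firewall versus roughly 59.8 kpps through Linux (485690 vs 598160 
datagrams over 10 s).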

As Christiano said, there may be room for optimization where solo5 does the 
I/O: once the NAT translation is done, forwarding is only a matter of copying 
data from one page to another.
This should also be reproducible with a simple NAT unikernel such as 
https://github.com/mirage/mirage-nat/tree/main/example . I'll try that later.
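
To make the "copying from one page to another" point concrete, here is a rough 
sketch of the hot path I have in mind, written against the Mirage_net.S 
interface (module names and exact signatures are from memory, so treat it as 
pseudo-code rather than the firewall's actual code):

open Lwt.Infix

(* Sketch: forward frames from src to dst.  Each received frame lands in an
   RX page and is blitted into a TX page on the other side; the NAT header
   rewrite would happen in between. *)
module Forward (Net : Mirage_net.S) = struct
  let forward ~src ~dst =
    Net.listen src ~header_size:14 (fun frame ->
      (* NAT rewrite of the Ethernet/IP/UDP headers would go here. *)
      let len = Cstruct.length frame in
      Net.write dst ~size:len (fun buf ->
        (* the per-packet page-to-page copy *)
        Cstruct.blit frame 0 buf 0 len;
        len)
      >|= function Ok () | Error _ -> ())
end

Profiling a loop like that on its own should show how much of the 100% CPU 
goes to the blit versus the NAT lookup and the Xen ring handling.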

Best,
Pierre
-- 
P.
