
Re: [Xen-users] XEN 4.3.1 network performance issue



> On Tue, Dec 24, 2013 at 05:06:37PM +0200, NiX wrote:
>> Hi. I am running a XEN VM in PV mode. This VM has four cores enabled. VM
>> network config is as follows:
>>
>> vif = ["ip=127.0.0.1,mac=XX:XX:XX:XX:XX:XX,bridge=br0,rate=100Mb/s"]
>>
>> Dom0 is dual XEON X5450 with 16GB of RAM.
>>
>> With zmap I can barely reach 10Mbps (~12k packets/second) on this VM.
>>
>
> My google-fu tells me that zmap sends out a gigantic amount of TCP SYN
> packets in its default configuration. It is possible the ring size
> between frontend and backend is the bottleneck. Data copying from
> frontend to backend might also be a problem.
>
> You can probably patch your Dom0 and DomU kernels with multi-page ring
> support to see if that improves things.
>
> Wei.
>

I was not able to exceed even 10k packets/second with a lightweight
equivalent tool, and the netback/0 thread fully utilized one 3GHz core.
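
For reference, the netback saturation is easy to see from dom0; a quick
check (thread names may differ between kernel versions):

ps -eLo pid,psr,pcpu,comm | grep netback

If one thread sits at ~100% of a core while the NIC itself is idle, the
vif backend is the bottleneck rather than the wire.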

Of course I had already tuned the network settings on both the host and
the VM, because the defaults cannot properly handle 10k or more
packets/second, e.g. due to the default 1k open files limit and so forth.

Well you know, basic users can manage with 1k packets and/or connections
per second, but I am not a basic user ;) Is there a handy link on how to
use the multi-page ring feature? I am going to test that.
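
To measure the real packet rate during the test I sample /proc/net/dev
once per second; a rough sketch (eth0 is just an example, substitute the
guest NIC or the dom0 vif):

IF=eth0   # example name; use the guest NIC or the dom0 vif
get() { tr ':' ' ' < /proc/net/dev | awk -v i="$IF" '$1 == i { print $11 }'; }
prev=$(get)
while sleep 1; do
    cur=$(get)
    echo "$IF: $((cur - prev)) tx packets/s"   # field 11 is TX packets
    prev=$cur
done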

I think this is something to test and compare against other
virtualization software as well.

I applied the following tweaks on both host and guest:

cat /etc/security/limits.conf

root hard nofile 102400
root soft nofile 102400

ulimit -a
open files                      (-n) 102400
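
Note that limits.conf only takes effect on a new login session, so it is
worth double-checking what the scanner process actually got (assuming it
is already running):

grep 'Max open files' /proc/$(pidof zmap)/limits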

cat /etc/sysctl.conf

net.ipv4.tcp_moderate_rcvbuf = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_app_win = 8
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_frto = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.ip_no_pmtu_disc = 1
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 4129093
net.core.wmem_default = 4129093
net.ipv4.tcp_rmem = 4129093  6566366 8388608
net.ipv4.tcp_wmem = 4129093  6566366 8388608
net.ipv4.tcp_mem =  8388608 8388608 8388608
net.core.netdev_max_backlog = 3000
net.core.optmem_max = 200000
vm.min_free_kbytes = 16384
net.ipv4.tcp_low_latency = 0
net.ipv4.route.flush = 1
net.ipv4.netfilter.ip_conntrack_max = 4194304

# Run the ARP cache garbage collector only once an hour
net.ipv4.neigh.default.gc_interval = 3600

# Set ARP cache entry timeout
net.ipv4.neigh.default.gc_stale_time = 3600

# Set ARP cache size thresholds
net.ipv4.neigh.default.gc_thresh3 = 4096
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh1 = 1024
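
The settings still have to be (re)loaded after editing the file, and a
couple of values are worth spot-checking:

sysctl -p /etc/sysctl.conf
sysctl net.core.rmem_max net.ipv4.tcp_rmem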


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

