Running out of ideas, so if anyone has any suggestions they would be greatly appreciated.
Have an Ubuntu 16.04 VM running on Proxmox with an LSI SAS card passed through to a JBOD. Disks are pooled via mergerfs and shared via NFS and Samba with other VMs on this and another server. Locally I'm seeing 500-600 MB/s write speeds and all is well. Samba/CIFS shares on other VMs see ~300 MB/s. Unfortunately, I'm not able to get much beyond ~110 MB/s on NFS.

Servers are physically connected via 1Gb through a switch, along with a second 10Gb direct link (on different subnets). Validated that all interfaces are using a standard 1500 MTU. iperf tests between the physical boxes to/from this NAS VM show speeds within 20% of peak rates, which is in line with expected overhead. VMs on the same server are much faster.
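For what it's worth, ~110 MB/s is suspiciously close to 1 GbE line rate, so it may be worth confirming the NFS traffic is actually taking the 10Gb path rather than the 1Gb one. Rough arithmetic (my own back-of-envelope, nothing here beyond the link speeds is from the setup above):

```shell
# 1 Gbit/s divided by 8 bits/byte gives the raw byte rate in MB/s;
# Ethernet/IP/TCP header overhead (~5-10%) brings usable payload down to
# roughly 112-118 MB/s, which matches the observed ~110 MB/s NFS ceiling.
echo "$(( 1000000000 / 8 / 1000000 )) MB/s raw"
```

On the client, `ip route get <server-ip>` shows which interface the kernel actually picks for the mount's address.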
Testing using dd:
Code:
dd if=/dev/zero of=/media/pool/downloads/test bs=1M count=1024
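One caveat with that dd invocation: writing from /dev/zero without a flush lets the page cache absorb the data, so the reported rate can overstate real disk or network throughput. Adding conv=fdatasync makes dd wait for the data to reach stable storage before printing a rate (the path below is a throwaway example, not the pool path):

```shell
# conv=fdatasync forces an fdatasync() before dd exits, so the reported
# MB/s includes the time to actually flush the data, not just cache it.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest
```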
Fairly common NFS export:

Code:
/media/pool *(rw,no_root_squash,insecure,no_subtree_check,fsid=101)
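One thing worth checking on that export: when neither sync nor async is given, exportfs defaults to sync, which makes the server commit every write to disk before replying and can hold NFS write throughput well below what the disks or network can do. A variant to try (same export with async made explicit; whether it's appropriate depends on how much data loss on a server crash is acceptable):

```
/media/pool *(rw,async,no_root_squash,insecure,no_subtree_check,fsid=101)
```

`exportfs -v` on the server shows which default actually applied.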
..and the corresponding mount options on the client side:

Code:
192.168.0.44:/media/pool /media/temp nfs rw,intr,hard,async,retrans=2,noatime,rsize=8192,wsize=8192,vers=3,timeo=600 0 0
Tried playing with rsize/wsize:

- rsize=8192,wsize=8192: 43 MB/s
- rsize=32768,wsize=32768: 64.6 MB/s
- rsize=65536,wsize=65536: 84.8 MB/s
- rsize=131072,wsize=131072: 97.7 MB/s
- rsize=524288,wsize=524288: 101-104 MB/s
- rsize=1048576,wsize=1048576: 107-120 MB/s
- rsize=4194304,wsize=4194304: 110-111 MB/s
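The numbers flattening out past 1M is expected, as far as I understand it: the requested rsize/wsize are only upper bounds, and the client clamps them to the server's advertised maximum (1048576 bytes on current Linux NFS servers), so a 4M request negotiates the same 1M as a 1M request. The values actually in effect show up in /proc/mounts rather than fstab; a small sketch of pulling them out (the option string below is a stand-in for a live mount line):

```shell
# On a real client: opts=$(awk '$3 == "nfs" {print $4}' /proc/mounts)
# Example option string standing in for a live NFS mount:
opts="rw,noatime,vers=3,rsize=1048576,wsize=1048576,hard,timeo=600"
echo "$opts" | tr ',' '\n' | grep -E '^(rsize|wsize)='
```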
mergerfs mount options tried over time (old fstab entries, kept commented out):

Code:
#defaults,direct_io,allow_other,fsname=mergerfsPool,category.create=epmfs 0 0
#defaults,direct_io,func.getattr=newest,allow_other,minfreespace=50G,fsname=mergerfs,category.create=epmfs,intr,readdir_ino,noforget 0 0
#defaults,direct_io,func.getattr=newest,allow_other,minfreespace=50G,fsname=mergerfs,category.create=epmfs 0 0
#defaults,direct_io,func.getattr=newest,allow_other,use_ino,minfreespace=50G,fsname=mergerfs,category.create=epmfs 0 0
#defaults,direct_io,allow_other,minfreespace=20G,fsname=MergerFS,category.create=ff 0 0
#defaults,allow_other,use_ino,func.getattr=newest,category.create=epmfs,moveonenospc=true,minfreespace=50G,fsname=mergerfsPool,dropcacheonclose=true 0 0
#defaults,allow_other,minfreespace=20G,fsname=mergerfsPool,category.create=epmfs,intr,readdir_ino,noforget 0 0
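Two of those mergerfs options seem worth flagging for this use case. direct_io bypasses the page cache on the pool, which the mergerfs docs note can hurt throughput when re-exporting over NFS/Samba; and for NFS exports specifically the docs recommend use_ino plus noforget so inode numbers stay stable across lookups (otherwise clients can hit stale file handle errors). A hypothetical fstab line combining those (branch paths are illustrative, not from my setup):

```
/mnt/disk* /media/pool fuse.mergerfs defaults,allow_other,use_ino,noforget,minfreespace=50G,category.create=epmfs,moveonenospc=true,fsname=mergerfsPool 0 0
```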
Same symptoms on Ubuntu default kernel/tuning settings, but updated to the following with no change:

Code:
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_synack_retries = 3
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_rfc1337 = 1
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.conf.all.log_martians = 1
sysctl: cannot stat /proc/sys/net/ipv4/inet_peer_gc_mintime: No such file or directory
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_dsack = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 0
sysctl: cannot stat /proc/sys/net/ipv4/tcp_tw_recycle: No such file or directory
net.ipv4.tcp_max_syn_backlog = 20000
net.ipv4.tcp_max_orphans = 9297
net.ipv4.tcp_orphan_retries = 1
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_max_tw_buckets = 743424
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 2500
net.core.somaxconn = 65000
vm.swappiness = 0
vm.dirty_background_ratio = 5
vm.dirty_ratio = 15
vm.min_free_kbytes = 59503
fs.file-max = 371712
fs.suid_dumpable = 2
kernel.printk = 4 4 1 7
kernel.core_uses_pid = 1
kernel.sysrq = 0
kernel.msgmax = 65536
kernel.msgmnb = 65536
kernel.shmmax = 5483866521
kernel.shmall = 1487594
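None of those sysctls touch the NFS server itself; one server-side knob that commonly matters for throughput is the nfsd thread count, which defaults to 8 on Ubuntu. A hypothetical bump to try (file and variable are the stock Ubuntu 16.04 ones; 16 is just an illustrative value):

```
# /etc/default/nfs-kernel-server
RPCNFSDCOUNT=16
```

..followed by restarting nfs-kernel-server; `cat /proc/fs/nfsd/threads` shows the count actually in effect.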