Playing with NFS4.1 Multipathing on Synology


nitrobass24

Moderator
When creating a new NFS datastore I noticed that VMware only supports v3 and v4.1. Unfortunately, Synology only officially supports NFS v4, not 4.1.

I was poking around my Syno and the kernel they are using does in fact support 4.1. With a simple edit I was able to enable this functionality on my 1812+ and set up a new NFS 4.1 datastore on the ESXi 6.5 cluster.

Before
Code:
root@DiskStation:/usr/syno/etc/rc.sysv# cat /proc/fs/nfsd/versions
+2 +3 +4 -4.1
After
Code:
root@DiskStation:/usr/syno/etc/rc.sysv# cat /proc/fs/nfsd/versions
+2 +3 +4 +4.1
How
We just have to edit the nfsd startup script, changing line 89 from "/usr/sbin/nfsd $N" to "/usr/sbin/nfsd $N -V 4.1" and then restarting the service (a sed one-liner that does the same edit is sketched after the commands below).
Code:
root@DiskStation:/usr/syno/etc/rc.sysv# vi S83nfsd.sh
root@DiskStation:/usr/syno/etc/rc.sysv# ./S83nfsd.sh  restart
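If you would rather not hunt for line 89 in vi, here is a rough sed equivalent; it assumes the line still reads exactly "/usr/sbin/nfsd $N" on your DSM version, so keep a backup and check the grep output before restarting:
Code:
cd /usr/syno/etc/rc.sysv
cp S83nfsd.sh S83nfsd.sh.bak                       # keep a copy of the stock script
sed -i 's|/usr/sbin/nfsd $N$|/usr/sbin/nfsd $N -V 4.1|' S83nfsd.sh
grep -n -- '-V 4.1' S83nfsd.sh                     # confirm the flag was added
./S83nfsd.sh restart
cat /proc/fs/nfsd/versions                         # should now report +4.1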
Will post more later with some IOMeter results
 

nitrobass24

Moderator
While I run the test, here is my setup:

Syno 1812+
3x Samsung PM853t 960GB in RAID5

Host: ESXi 6.5
Xeon E3-1265L v2
16GB DDR3 ECC

Guest VM (where I am running IOMeter)
Win 10 x64 build 1607
1 vCPU
2048MB Ram
VMware Tools installed

IOMeter Results
[Screenshot: iSCSI with multipathing over 2x 1Gb links]

[Screenshot: NFS 4.1 over 2x 1Gb links (LACP)]
 

maze

Active Member
Are the changes persistent? I've done a bit of editing to my fstab and those changes did not stick... just wondering :)
 

nitrobass24

Moderator
Are the changes persistent? I've done a bit of editing to my fstab and those changes did not stick... just wondering :)
They persist through reboots, but not software upgrades. However, if you are like me, you only upgrade once in a blue moon.
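A quick way to check after a DSM update whether the edit is still in place (same paths as in the first post):
Code:
cat /proc/fs/nfsd/versions                             # +4.1 means the change is still active
grep -n -- '-V 4.1' /usr/syno/etc/rc.sysv/S83nfsd.sh   # line 89 should still carry the flag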


 

PigLover

Moderator
They persist through reboots, but not software upgrades. However, if you are like me, you only upgrade once in a blue moon.
And if you are like me, you forget all the edits that won't persist through an upgrade...

Always take good notes. And store them somewhere you can find them later :)

This is good stuff, BTW. Thank you.
 

maze

Active Member
What's the purpose of them using a kernel that supports 4.1 but not taking advantage of it?
Could be that a previous major update enabled 4.1 support in the kernel, but they weren't ready to enable it in their software yet... just a thought.
 

nitrobass24

Moderator
Initial results posted above. The caveat is I haven't yet tested NFS multipathing, because I need to do more reading on how to properly set that up with VMware. So for now I have the same links set up in a LAG (LACP) group.

What was most surprising to me is that, all other things being equal (host, VMware guest, storage backend, raw network bandwidth), I am seeing nearly a 30% increase in IOPS and a 23% reduction in IO response times with NFS 4.1 versus iSCSI.

Also, hardware acceleration (VAAI) is not supported with Synology NFS (any version) on VMware 6.5. I am hoping their plug-in is updated soon, as that should improve things as well.

Will update once I figure out the NFS multipathing.
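For anyone who wants to experiment in the meantime: as far as I can tell, VMware handles NFS 4.1 multipathing (session trunking) by mounting the datastore against multiple server IPs rather than through a LAG, so the Synology side would need a separate IP on each link instead of the bond. A rough sketch with esxcli; the IP addresses, export path, and datastore name below are placeholders:
Code:
# On each ESXi host, mount the NFS 4.1 datastore with two Synology IPs (one per 1Gb link)
esxcli storage nfs41 add -H 192.168.1.10,192.168.1.11 -s /volume1/datastore1 -v syno-nfs41
# Verify the mount and which server addresses are in use
esxcli storage nfs41 list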
 

nitrobass24

Moderator
And if you are like me, you forget all the edits that won't persist through an upgrade...

Always take good notes. And store them somewhere you can find them later :)

This is good stuff, BTW. Thank you.
+1 for good documentation, but in this instance I am hoping to figure out where to place a startup script that will persist through upgrades and can check the NFS status at boot. That would limit the downtime of my datastores.
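One idea I am considering (untested, and assuming DSM still runs executable scripts under /usr/local/etc/rc.d/ at boot; a boot-triggered task in DSM's Task Scheduler would be an alternative) is a small check that re-applies the flag if an update reverts it:
Code:
#!/bin/sh
# Hypothetical /usr/local/etc/rc.d/enable-nfs41.sh (chmod +x); DSM calls rc.d scripts with start/stop.
NFSD_SCRIPT=/usr/syno/etc/rc.sysv/S83nfsd.sh
case "$1" in
  start)
    # Re-add -V 4.1 to the nfsd startup script if a DSM update removed it, then restart nfsd.
    if ! grep -q -- '-V 4.1' "$NFSD_SCRIPT"; then
      sed -i 's|/usr/sbin/nfsd $N$|/usr/sbin/nfsd $N -V 4.1|' "$NFSD_SCRIPT"
      "$NFSD_SCRIPT" restart
    fi
    ;;
  stop)
    ;;
esac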
 

nitrobass24

Moderator
Those speeds seem low for a pool of 3x SSDs.

What happens if you increase the cores on the guest VM?
Here is a 15-minute run with the vCPUs upped to 4. I also increased the datastore shares from 1000 to 5000, but that shouldn't matter; I am only running a CentOS box with dnsmasq on the same array while this runs.

[Screenshot: IOMeter result]

I just always assumed it was being limited by the SATA II interfaces on the Syno. Here is the picture from the Syno utilization monitor while this is running; you can see it is right around 80% the whole time.
[Screenshot: Syno utilization monitor]
 