SMB vs NFS vs iSCSI - Share performance on Windows 10


Mastakilla

Hi everyone,

As I was trying to figure out which datasets / zvols and shares are best to create for my home FreeNAS server, I first wanted to get a better idea of the performance of all these options. As my knowledge of these options is still pretty limited, I wanted to share my experiences, so others can tell me if I'm doing something wrong / sub-optimal, and perhaps my knowledge / experiences can also be useful to others...

My FreeNAS server should not be limited by CPU (AMD Ryzen 3600X), and the network should not be limiting much either (Intel X550 10GbE). I have a single pool with 8x 10TB in RAIDZ2 and 32GB RAM, no L2ARC / SLOG. For more details, see my signature.

I've run all benchmarks on my Windows 10 1909 desktop (Ryzen 3900X with 32GB RAM, Intel X540 NIC, NVMe SSD), which has a direct connection (so without a router in between) to the FreeNAS server.

Config - Pool
1593446390924.png

Config - SMB
1593446423395.png
1593446438735.png

Config - NFS
1593446466190.png
1593446477816.png
Code:
C:\Users\m4st4>mount

Local    Remote                                 Properties
-------------------------------------------------------------------------------
p:       \\192.168.10.10\mnt\hgstpool\nfsds     UID=1000, GID=0
                                                rsize=131072, wsize=131072
                                                mount=soft, timeout=10.0
                                                retry=1, locking=no
                                                fileaccess=755, lang=ANSI
                                                casesensitive=no
                                                sec=sys
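
Something along these lines should reproduce a mount with the properties shown above (assuming the built-in mount.exe that comes with the Windows "Client for NFS" feature; the share path and option values are from my setup, adjust them to yours):
Code:
:: Hedged example: recreate an NFS mount roughly matching the properties above.
:: All option values are specific to my setup.
mount -o mtype=soft timeout=10 retry=1 nolock fileaccess=755 lang=ansi casesensitive=no sec=sys \\192.168.10.10\mnt\hgstpool\nfsds p: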
1593446534678.png
 

Mastakilla

Performance - SMB
With many apps / VMs still open in the background - no loop - sync disabled
1593446860588.png
After a fresh reboot with no apps / VMs open in the background - 5 loops - sync disabled
1593446891283.png

With many apps / VMs still open in the background - 5 loops - sync enabled
1593446914080.png

Performance - NFS
With many apps / VMs still open in the background - no loop - sync disabled
1593446938691.png
After a fresh reboot with no apps / VMs open in the background - 5 loops - sync disabled
1593446976203.png

With many apps / VMs still open in the background - no loop - sync enabled

1593447007603.png
Performance - iSCSI
With many apps / VMs still open in the background - no loop - sync disabled
1593447029367.png
After a fresh reboot with no apps / VMs open in the background - 5 loops - sync disabled
1593447053980.png

After a fresh reboot with no apps / VMs open in the background - no loop - sync disabled
1593447075757.png
After a fresh reboot with no apps / VMs open in the background - 5 loops - sync enabled
1593447093456.png
 

Mastakilla

Performance - RAID5
Just as a reference, I'm also including some benchmarks of my good old trusty storage system: an Intel Core i7 920 @ 3.8GHz, 12GB RAM, LSI MR 9260-8i with 8x 4TB in RAID5. This filesystem is 99% full, so everything is probably fragmented as hell and a lot slower than it could be, but it does give an idea of "Internal Storage" vs "Network Storage".
1593447155215.png

Performance - Intel NAS Performance Toolkit
I also ran the Intel NAS Performance Toolkit a couple of times on all shares. Below is a summary of the results:
1593447164898.png
 

Mastakilla

NetData comparison
I've also installed the NetData plugin in a jail and tried to pick some interesting graphs while running CrystalDiskMark.

SMB - no loop - sync disabled
1593447210703.png

iSCSI - no loop - sync disabled
1593447245181.png
NFS - no loop - sync disabled
1593447265088.png

SMB - no loop - sync disabled
1593447282505.png
iSCSI - no loop - sync disabled
1593447292957.png
NFS - no loop - sync disabled
1593447303023.png

SMB - no loop - sync disabled

1593447328336.png
iSCSI - no loop - sync disabled
1593447344025.png
NFS - no loop - sync disabled
1593447353492.png
 

Mastakilla

SMB - no loop - sync disabled
1593447393903.png

iSCSI - no loop - sync disabled
1593447407081.png

NFS - no loop - sync disabled
1593447416633.png


Notes / Remarks / Conclusions
CrystalDiskMark
To be honest, I wasn't expecting much performance difference between SMB/NFS/iSCSI at all. I know some are more resource hungry, but as none should be able to starve my resources, I was expecting little to no performance difference.
So I was very surprised to see such huge differences...

1) I'm using a 64GB test set in CrystalDiskMark to (hopefully) get more accurate, worst-case-scenario scores (so FreeNAS can't serve everything from its RAM cache all the time).

2) loop vs no loop

CrystalDiskMark scores are A LOT higher when running it in a loop. I guess this is also because FreeNAS caches more in RAM when running it in a loop. The strange thing is that simply running it multiple times in a row without a loop doesn't produce the same high scores as running it in a loop. Is the RAM cache "cleared" that quickly already?
Especially when using iSCSI, the difference is HUGE.
It seems to be (at least partly) because ZFS is still completing its writes when the read test already starts...
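
One way to sanity-check the RAM-cache theory is to watch the ARC counters on the FreeNAS box between runs; a rough sketch using the FreeBSD sysctl counters (the exact numbers will obviously differ per run):
Code:
# Snapshot ARC size and hit/miss counters before and after a CrystalDiskMark run.
# Hit ratio = hits / (hits + misses); a big jump in hits during the looped runs
# would confirm the reads are being served from RAM.
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses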

3) NFS performance is horrible
I'm using "Client for NFS" that is included in Windows 10. As far as I could find, this NFS client only supports NFSv3. I wasn't able to enable / force NFSv4 on it. I'm not sure if there are better 3rd party NFS clients for Windows that do support NFSv4 and have better performance...?

4) iSCSI and SMB are both "ok" I guess?
It seems like SMB is a little better for sequential transfers and iSCSI might be better for random transfers. But I'm not sure if it is OK to draw many conclusions from this... Should I compare looped results with looped results, or non-looped with non-looped?

5) "Internal Storage (hardware RAID5)" vs "Network Storage (FreeNAS RAIDZ2)"
Here too I'm not sure whether I should draw many conclusions, as my RAID5 is 99% full. But at first sight, "Network Storage" seems faster in all regards than "Internal Storage". I'm not sure whether there are specific aspects that CrystalDiskMark doesn't test (like latency?) that could be faster on the "Internal Storage" - see the diskspd sketch below for one way to measure that.
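
One way to get latency numbers that CrystalDiskMark doesn't show is Microsoft's diskspd; a minimal sketch for 4K random reads against the networked drive letter (the file name, size and duration are just placeholders):
Code:
:: Hedged example: measure 4K random read latency on the SMB/iSCSI drive letter.
:: -L prints latency percentiles, -Sh bypasses local caching, -c8G creates an 8GB test file.
diskspd.exe -b4K -d60 -t1 -o1 -r -L -Sh -c8G P:\latencytest.dat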

6) Enabling sync makes write speeds unusably slow
Do I understand it correctly that having sync enabled is especially important when using the pool for block storage, such as iSCSI, as having sync disabled could potentially destroy the partition? While for NFS and SMB, the most you can lose is the file(s) you were transferring at the time of the power loss?
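
Either way, the sync behaviour is a per-dataset / per-zvol ZFS property, so it can be forced only where it matters most; for example (the zvol / dataset names below are placeholders for my iSCSI extent and SMB share):
Code:
# Force synchronous writes only on the zvol backing the iSCSI extent, and leave
# the SMB dataset at the default (names are placeholders for my actual zvol / dataset).
zfs set sync=always hgstpool/iscsizvol
zfs set sync=standard hgstpool/smbds
zfs get sync hgstpool/iscsizvol hgstpool/smbds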

Intel NAS Performance Toolkit
As you can see, the results are EXTREMELY inconsistent, so I'm not sure if it is usable at all. iSCSI performs unrealistically / impossibly well. I suspect this must be because of extreme compression or something?
When starting NASPT, it does warn that results will probably be unreliable because I have more than 2GB RAM (32GB actually :p).
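
If compression is what inflates the numbers, it should show up in the compress ratio of the datasets / zvol; something like this from the FreeNAS shell (the zvol name is a placeholder):
Code:
# Compare the compression setting and achieved ratio of the share dataset and
# the zvol backing the iSCSI extent (zvol name is a placeholder).
zfs get compression,compressratio hgstpool/nfsds hgstpool/iscsizvol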

NetData comparison
1) CrystalDiskMark phases

CrystalDiskMark first prepares the benchmark by writing (I guess) a 64GB test file to the disk under test. This takes about 2 minutes.
Then it runs the read tests, starting with 2 sequential tests and then 2 random access tests.
Finally it runs the write tests, starting with 2 sequential tests and then 2 random access tests.
You can use the "Disk Usage" graph below to see which part of the test CrystalDiskMark is in at any given time.

2) CrystalDiskMark duration vs benchmark results
SMB took 4m20s
iSCSI took 6m10s
NFS took 5m50s
So although the benchmark result of iSCSI is better than NFS's, the NFS benchmark completed faster than the iSCSI benchmark. And although I don't know the exact times, by checking the graphs I can see that it is not just the preparation that takes longer, but also the test itself. This is weird...

3) The reason why running a CrystalDiskMark loop is faster than a non-loop
In the "Disk Usage" graph I now also clearly saw why running CrystalDiskMark in non-loop is slower... It is because it directly starts with the read-test, after "preparing" the benchmark (which is writing to the disk). In the "Disk Usage" graph, you can clearly see that ZFS is still in progress of writing, while it already starts the read test. For a proper ZFS test, CrystalDiskMark should delay its read test for a couple seconds, so that ZFS can complete its writing to the disk...

4) CPU usage comparison (this is a bit of guess / estimation work, by looking at the graphs)
SMB: During preperation not so much fluctuation, about 20% average, max 55%. During read-test about 3%. During write-test about 20% average, max 35%
iSCSI: During preparation much fluctuation, about 25% average, max almost 60%. During read-test about 4%. During write-test about 20% average, almost 60% max
NFS: During preparation much fluctuation, I think less than 10% average, max 20%. During read-test about 2%. During write-test about 10%, max 20%.
Actually the CPU usage during the read test is too small / short / polluted by the writes still going on in the background to be usable at all...
NFS uses less CPU, at least partly, because of being slower. So not sure how comparable this result is either.
iSCSI seems a little bit more CPU hungry than SMB, but not by much...
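
Instead of eyeballing the NetData graphs, per-service CPU usage can also be cross-checked live from the FreeNAS shell; a rough sketch:
Code:
# Show per-thread CPU usage, including kernel threads (smbd, nfsd, the iSCSI
# target and the ZFS threads); -S includes system processes, -H shows threads.
top -SH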

5) Disk usage comparison (this is a bit of guess / estimation work, by looking at the graphs)

SMB: During preparation pretty consistent at about 700MB/sec. During the read test 1 peak of about 900MB/sec. During the write test about the same as during preparation.
iSCSI: During preparation the minimum seems similar to SMB, but the peaks are A LOT higher (almost 2GB/sec); still, it took longer to prepare. During the read test 1 huge peak of 2GB/sec. During the write test about the same as during preparation.
NFS: During preparation many short fluctuations with maximums (730MB/sec) similar to SMB, but much lower minimums (50MB/sec). During the read test 2 peaks of about 200MB/sec. During the write test slightly fewer fluctuations than during preparation, but a lower average (400MB/sec).
As the iSCSI "Disk Usage" seems to reach unrealistically high peaks, I suspect that much of this transfer might be highly compressible metadata, because it is block storage or something.
The results don't seem very comparable...

6) ARC hits comparison (looking only at the read tests)
SMB: Peaks of 40% / 50% during sequential and around 30% during random access
iSCSI: Peaks of 45% / 30% during sequential and around 30% during random access
NFS: Peaks of about 20% during both sequential and random access

7) NetData freezes during iSCSI benchmark
I also noticed that NetData refreshed nicely every second while benchmarking SMB and NFS, but repeatedly froze for about 3-5 seconds while benchmarking iSCSI.
I'm not sure whether iSCSI also causes performance issues outside of the NetData plugin, but it is concerning...

As my signature isn't easily found, below are the specs of my Storage Server:

OS: FreeNAS 11.3-U3.2
Case: Fractal Design Define R6 USB-C
PSU: Fractal Design ION+ 660W Platinum
Mobo: ASRock Rack X470D4U2-2T
NIC: Intel X550-AT2 (onboard)
CPU: AMD Ryzen 5 3600
RAM: 32GB DDR4 ECC (2x Kingston KSM26ED8/16ME)
HBA: LSI SAS 9211-8i
HDDs: 8x WD Ultrastar DC HC510 10TB (RAID-Z2)
Boot disk: Intel Postville X25-M 160GB
 

mervincm

Thank you so much for this. I have a system right now running Xpenology and was looking at more "legitimate" options, FreeNAS being one of them.

This is what I see today from Windows 10 with my existing storage system (i5-8600K, 64GB, Intel 10G, 1500 MTU).

80% full 7x8TB IronWolf in R5, Intel SSD 750 as r/o cache, excluding synchronous
1595893296536.png

25% full 6x1TB Micron 5100 Pro in R5, with VMs and containers running
1595894098350.png

If I could maintain similar performance, perhaps by adding a 280GB Optane for ZIL / L2ARC, I might make the move.
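
For reference, adding such a device to an existing ZFS pool is a one-liner per role; a sketch with hypothetical pool / partition names (note that a SLOG only accelerates synchronous writes, and an L2ARC mainly helps once the working set no longer fits in RAM):
Code:
# Hypothetical device / pool names (e.g. the Optane partitioned as nvd0p1 / nvd0p2).
zpool add tank log nvd0p1     # SLOG: accelerates synchronous writes only
zpool add tank cache nvd0p2   # L2ARC: extra read cache (uses some RAM for its headers)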