NAS overhaul for SMB Direct


Alexdi

Member
Dec 7, 2012
I'm looking for a sanity check on a plan to upgrade my server. This is what I have:

E3-1270V2 3.5 GHz
Supermicro X9SCM-F
2 x 8 GB DDR3
850 Evo 500GB (OS)
Adaptec 78165 RAID HBA
6 x 6 TB 7200 RPM Hitachi He6 in RAID-6
Intel X540-T2 10Gb/s
Server 2019

I'm frequently moving tens of thousands of files in 100GB+ chunks between this and my desktop machine, which is directly connected to it with the same X540 NIC. Ideally, I'd be able to run things from the server at performance levels more akin to my local SSDs.

This desire is constrained by two problems:
1. Sustained write speeds suck. For file transfers under a certain size, it'll do 550 MB/s or so. Longer transfers that saturate the write cache drop to 100-150 MB/s, or sometimes 350 MB/s; this inconsistency is a related problem.
2. Network IOPS/latency sucks. If I share the 850 Evo and benchmark it from my desktop, it's probably 15X slower than native.

My plan to address these is:
1. Upgrade the RAID controller to an 8885Q with MaxCache 4.0.
2. Add 4 x 200GB SAS 12Gb/s SSDs in RAID-0 for cache.
3. Upgrade the network cards to something that supports SMB Direct.
4. If necessary for PCIe bandwidth or other platform limits, swap the Intel setup for a Ryzen 3600 / ASRock X470D4U.

I'll consider this project a success if I can write to the array at 1 GB/s consistently and, for anything cached by the SSDs, read and write at 10K IOPS or better.
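
For what it's worth, those two targets can be sanity-checked from the desktop with a quick test against the mapped share. The sketch below is only illustrative: the Z:\bench path is a placeholder for the mapped RAID volume, and it runs single-threaded at queue depth 1, so a proper tool like diskspd or CrystalDiskMark will give more representative numbers.

```python
# Quick sanity check of the two targets from the desktop side:
# ~1 GB/s sequential writes and ~10K random 4K IOPS against the share.
# Sketch only; "Z:\bench" is a placeholder for the mapped RAID volume.
import os, random, time

PATH = r"Z:\bench\testfile.bin"
SEQ_SIZE = 8 * 1024**3   # 8 GiB, large enough to blow past small write caches
BLOCK = 1024**2          # 1 MiB blocks for the sequential pass

# Sequential write pass
buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(SEQ_SIZE // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())
elapsed = time.perf_counter() - start
print(f"sequential write: {SEQ_SIZE / elapsed / 1e6:.0f} MB/s")

# Random 4K read pass against the same file (QD1, so a pessimistic number)
IO_COUNT = 20_000
start = time.perf_counter()
with open(PATH, "rb", buffering=0) as f:
    for _ in range(IO_COUNT):
        f.seek(random.randrange(SEQ_SIZE // 4096) * 4096)
        f.read(4096)
elapsed = time.perf_counter() - start
print(f"random 4K read: {IO_COUNT / elapsed:.0f} IOPS")

os.remove(PATH)  # clean up the test file
```

Anything that sustains roughly 1000 MB/s on the first line and 10,000 on the second would meet the goal.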

Would my plan achieve this? If so, what are some economical 10Gb+ NICs that satisfy the SMB Direct requirement without inordinate CPU overhead or configuration requirements? While I'm using Cat 6 now, SFP+ or InfiniBand cables would be fine too. There are no switches here to worry about.

Thanks for any thoughts.
 

PigLover

Moderator
Jan 26, 2011
IIRC, SMB Direct is not supported on desktop versions of Windows - only on Windows Server. Before you commit $$$ to upgrading the NAS and replacing the NIC in your desktop, you should confirm that it will work at all (or look into upgrading the OS on your desktop to Windows Server).
 

Alexdi

Member
Dec 7, 2012
The 'Windows Features' list in Win10 1903 has the 'SMB Direct' box checked. There's no way to check whether it works without the right cards, though. Running Server as a desktop OS would be possible, but in my experience it creates hassles with desktop programs that check the Windows version.
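
Once RDMA-capable cards are in on both ends, whether SMB Direct is actually negotiated can be checked with the stock PowerShell cmdlets Get-NetAdapterRdma and Get-SmbMultichannelConnection. Here's a minimal sketch that wraps them from Python; it assumes a standard Windows install and is only a starting point:

```python
# Sketch: check whether RDMA (SMB Direct) is available and actually in use.
# Wraps stock PowerShell cmdlets; run on Windows after RDMA-capable NICs are installed.
import subprocess

def ps(command: str) -> str:
    """Run a PowerShell command and return its text output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# NICs that expose RDMA, and whether RDMA is enabled on each
print(ps("Get-NetAdapterRdma | Format-Table -AutoSize"))

# Active SMB multichannel connections; the RDMA-capable columns show
# whether SMB Direct was negotiated (run this during a file copy).
print(ps("Get-SmbMultichannelConnection | Format-Table -AutoSize"))
```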
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
Hold up. Your server has six mechanical 7.2K drives in RAID-6 and you expect to saturate a 10-gig link on large sequential writes?
The question of how to design a NAS for 1 GB/s (over 8 Gb/s) of sequential writes has come up more than once.
SMB Direct and RoCE aren't going to help you much. Not that they're useless - they aren't - but think of them as tools to improve access latency; they won't do much for large file copies.
Here's some good advice from @gea, our own ZFS guru:
https://forums.servethehome.com/ind...mb-sec-sequential-on-10gig.22195/#post-206930
 

Alexdi

Member
Dec 7, 2012
As above, the idea is to write to an SSD cache, not to the array directly. I'm under the impression MaxCache 3+ and CacheCade 2 allow this.

SMB Direct is the solution I'm pursuing for the latency problem. I've no idea how plug-and-play it would be or which NICs would be most appropriate. Any thoughts on that one?
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
Assuming your assumption about RAID-controller SSD write caching holds true: the 850 EVO is a TLC drive with an "SLC" cache, aka the TurboWrite area, which can be around 22 GB; after that, write speed falls to native TLC speeds, which reportedly can be as low as 295 MB/s.
Samsung 860 EVO 500GB SATA SSD Review - Page 6 of 7 - Legit Reviews (this is a newer drive than yours; I expect yours to have similar or slightly slower speeds)
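To put those TurboWrite numbers in perspective, here's a rough back-of-the-envelope sketch; the figures are the approximate ones above plus a nominal 10 GbE line rate, not measurements:

```python
# Rough effect of the SLC (TurboWrite) cache on a large sequential transfer.
# All figures are approximations from this thread, not benchmarks.
slc_cache_gb = 22       # ~22 GB TurboWrite region on a 500 GB EVO
line_rate = 1.1         # GB/s, roughly what a 10 GbE link can deliver
tlc_rate = 0.3          # GB/s, native TLC write speed once the cache is full
transfer_gb = 100       # a typical "100GB+ chunk" from the original post

time_in_cache = slc_cache_gb / line_rate                    # ~20 s
time_after_cache = (transfer_gb - slc_cache_gb) / tlc_rate  # ~260 s
average_rate = transfer_gb / (time_in_cache + time_after_cache)

print(f"SLC cache exhausted after ~{time_in_cache:.0f} s")
print(f"average over {transfer_gb} GB: ~{average_rate * 1000:.0f} MB/s")
```

At line rate the SLC cache is gone in roughly 20 seconds, and a 100 GB copy averages out to around 350 MB/s - right in the ballpark of the inconsistent numbers in the first post.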
Again, this isn't a networking or latency issue, but a storage-performance (or lack thereof) issue.
If you want consistently high write speeds, you need something with more muscle, like an Optane 900p or better.
SMB Direct could help you get from a sustained 900 MB/s to 1 GB/s (very roughly), but you need to address your storage first.
Again, going with a very high-speed write cache isn't a bad idea, but instead of RAID-10 SAS SSDs, I'd personally go with one of the drives from here:
https://www.servethehome.com/buyers...as-servers/top-picks-freenas-zil-slog-drives/
 

Alexdi

Member
Dec 7, 2012
The Evo is just the OS drive; I never read or write anything to it. (The comment about sharing it was just to benchmark how poor the IOPS are over a network without RDMA. Its write endurance isn't nearly high enough for consistent caching.)

Optane is an interesting thought, though mega-expensive. The particular cache drives I'm looking at are Hitachi enterprise 800MM SAS or equivalent. SAS is an unfortunate requirement because I have to be able to plug them into the HBA, which is why I'm opting for at least four to bring the sustained write speed high enough.
 

Alexdi

Member
Dec 7, 2012
An update on this--
  • I upgraded the RAID controller to an 8885Q. No change in performance (and none expected).
  • I added 4 x 200GB Hitachi SSD1600MM drives in RAID-1E as a MaxCache 3.0 device and assigned it to the array. Significant improvement in random I/O, no change in sequential. Not my use case.
  • I installed PrimoCache. This lets me saturate my 10 GbE link for as much data as I'm willing to allocate memory to. Curiously, setting up a write-through L2 cache with no RAM allocated shows it's only capable of 800-950 MB/s from a 2000 MB/s RAID-0 of the SAS drives above; a two-drive RAID-0 only drops to 750 MB/s. Makes me wonder whether an Optane drive might be a better choice.
 

Alexdi

Member
Dec 7, 2012
I think PrimoCache is what I'm looking for.

I swapped the 78165 back in and added a 970 Pro 512 GB on an x4 adapter and another 16 GB of RAM. I set the RAM as a write cache and much of the 970 to a 60/40 R/W split. From the remote machine, I mapped the RAID as write-through to skip SMB caching. Benchmarking the mapped drive yields this:

16 GB RAM + 512 GB NVMe:

[benchmark screenshot: upload_2019-12-23_19-39-3.png]

512 GB NVMe alone:

[benchmark screenshot: upload_2019-12-23_19-42-39.png]

PrimoCache disabled (though in the middle of an array scrub task; sequential is down ~30% and 4KB writes ~3x):

[benchmark screenshot: upload_2019-12-23_19-48-9.png]

The narrow differences between RAM and NVMe suggest I'm latency-limited by the network connection in these benchmarks.

In real-world transfers, arbitrarily large files copy to the RAID over the network at 1.15 GB/s for the first 16 GB, then drop to 950 MB/s for the remainder. Cached reads are similar. After the cached transfer completes, it's possible to manipulate the file while it's still being written out to the RAID. I haven't run the software long enough to say how clever the read-caching algorithm is relative to MaxCache or CacheCade, but so far this is very slick.
 

kapone

Well-Known Member
May 23, 2015
PrimoCache is a cache for the disk(s). RDMA is about getting data from one machine to another over the network.

Disk caching is one layer below RDMA.
 

RyanCo

New Member
Jan 19, 2021
kapone said:
PrimoCache is a cache for the disk(s). RDMA is about getting data from one machine to another over the network.

Disk caching is one layer below RDMA.
Thanks, good to know. I'm going to have to play with PrimoCache as well when all these parts get in.