End of Year NAS Build

fastmachine

New Member
Mar 5, 2014
8
1
3
Looking at building a new NAS server for 2021. I looked through the forum and this seems like it should work.

Build Name: 2021 in with the new NAS Server

Operating System/ Storage Platform: Debian/ZFS
CPU: AMD EPYC 7232p
Motherboard: Supermicro H12SSL-C
Case: SuperChassis 836BE1C-R1K23B
CPU Cooler: Noctua NH-D9L 46.44 CFM CPU Cooler
Memory: G.Skill Trident Z 64 GB (8 x 8 GB) DDR4-3400 CL16 Memory
Storage: Western Digital Red Pro 6 TB 3.5" x 8
Boot Drive: TBD


Usage Profile:
Mainly for file storage, backups and holding some movie files. Just serving files via SMB, nothing complicated.
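The rough ZFS layout I have in mind is a single RAIDZ2 vdev of the eight drives, something like this (pool name and the /dev/disk/by-id device names are placeholders, not my actual drives):

```shell
# Placeholder sketch: one RAIDZ2 vdev from the eight 6 TB WD Red Pros.
# Using persistent /dev/disk/by-id paths so the pool survives device renumbering.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE1 /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE2 \
  /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE3 /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE4 \
  /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE5 /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE6 \
  /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE7 /dev/disk/by-id/ata-WDC_WD6003FFBX-DRIVE8

# A dataset for the SMB share, with lz4 compression (cheap win for mixed files):
zfs create -o compression=lz4 tank/media
```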

Other Information: This will be my third NAS/ZFS box, but the first rack mountable. I am thinking of using this as a hobby/sync device, so I can wait a little if I need to for drivers and such. I am planning on upgrading it over the years to add more drives, etc. Hence the server motherboard.

Quick Questions:
  • Since this will be my first rack mounted system, am I missing anything I need besides cables?
  • Should the H12SSL work with Debian? I can't seem to find details confirming this.
  • Any other suggestions?
Thanks for your help. (And sorry if I missed an obvious post that answers the above; I didn't see one.)
 

itronin

Well-Known Member
Nov 24, 2018
928
602
93
Denver, Colorado
US based or elsewhere?

Are you intending for this to do more than serve up files?

Are you buying everything new or mix of new and used parts?
The way you have this spec'ed I'm thinking new. If you have the funds and budget then new is very nice.
If you don't want to tinker and piecemeal it together then new is a good choice.

Noise?
Is this server going to be in a noise isolated or noise sensitive area?
FWIW, the PSUs spec'ed in that SC are pretty loud.

Features?
SAS3 expander - check.
SAS3 on motherboard - check.
3U CPU H/S - check.
Can add 2 rear 15mm 2.5" hot swap bays - check.
Can add 2 front 7mm 2.5" hot swap bays - check.

I'm a fan of the 836 as I think it's a good solution balancing tradeoffs.

Boot Drive: do you need HW raid 1 for your boot drive(s)?

Suggestion: You might want to look at the used enterprise SATA or SAS SSD market for boot drives. Intel DC S35/36/37xx (with PLP) or HGST
 

fastmachine

US based or elsewhere?
US based.

Are you intending for this to do more than serve up files?
Just out of curiosity, for a simple file server why go with the EPYC 7232P? Seems way overkill.
For right now just files, but I wanted to have options for more advanced stuff in the future.

Are you buying everything new or mix of new and used parts?
New parts, but I will look at the used drive recommendation below.

Noise?
Is this server going to be in a noise isolated or noise sensitive area?
It will be in an isolated area.


Boot Drive: do you need HW raid 1 for your boot drive(s)?
Not at this time. I am used to just using a USB flash drive, so this will be quite an upgrade.

Suggestion: You might want to look at the used enterprise SATA or SAS SSD market for boot drives. Intel DC S35/36/37xx (with PLP) or HGST
Will check it out.

Thanks for all the help. I always feel so overwhelmed when it comes time for a new server.
 

itronin

Missed that this board has qty 2 M.2 PCIe x4 slots onboard.

SLOG options to think about: Intel P4801x or Intel Optane 900P both in m.2
(and qty 2) if you want the SLOG mirrored.

You could also use the M.2s for system/boot/fast storage and do an AIC for SLOG, though you'd be burning lanes. So maybe use a dual or quad M.2 carrier board for SLOG, and you could just as easily use a dual or quad carrier for system/boot/fast storage... so many options with your motherboard choice.

If you use m.2's I'd heatsink them though.
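If you want to keep an eye on those temps, nvme-cli on Debian will report them (the device name below is a placeholder; list your drives first):

```shell
# Install the tool and list NVMe devices (apt install nvme-cli):
nvme list
# The SMART log includes the composite drive temperature:
nvme smart-log /dev/nvme0 | grep -i temperature
```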
 

fastmachine

SLOG options to think about: Intel P4801x or Intel Optane 900P both in m.2
(and qty 2) if you want the SLOG mirrored.
I didn't think about that. Does the 905P work with AMD? I was a little confused on that. Looking around, would the Samsung 980 PRO PCIe 4.0 NVMe SSD 250GB work better, since it is PCI Express 4.0 while I think most of the non-DCPMM Intels are PCI Express 3.0?

But thanks either way. I hadn't gotten all the way down to looking at SLOG and it looks very helpful/powerful.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,333
1,791
113
CA
I didn't think about that. Does the 905P work with AMD? I was a little confused on that. Looking around, would the Samsung 980 PRO PCIe 4.0 NVMe SSD 250GB work better, since it is PCI Express 4.0 while I think most of the non-DCPMM Intels are PCI Express 3.0?

But thanks either way. I hadn't gotten all the way down to looking at SLOG and it looks very helpful/powerful.

Yes, the 905P works with AMD. I use one in my AMD-based desktop/workstation :)
 

gea

Well-Known Member
Dec 31, 2010
2,817
975
113
DE
I didn't think about that. Does the 905P work with AMD? I was a little confused on that. Looking around, would the Samsung 980 PRO PCIe 4.0 NVMe SSD 250GB work better, since it is PCI Express 4.0 while I think most of the non-DCPMM Intels are PCI Express 3.0?

But thanks either way. I hadn't gotten all the way down to looking at SLOG and it looks very helpful/powerful.
The Samsung is a bad Slog.
To avoid repeating myself, read https://forums.servethehome.com/ind...-hyperx-for-os-slog-drives.31223/#post-288942

You do not need a Slog mirror unless this is an ultra-critical production system. If a Slog fails, the system will simply revert to on-pool ZIL logging; the result is only degraded performance. Only if the system crashes and the Slog fails at that moment can a loss of committed writes happen.

Your board will give superior performance, especially when it comes to encryption.
I have tried a similar setup and it was up to twice as fast as a same-priced system from last year: https://forums.servethehome.com/ind...4110-vs-amd-epyc-7302-on-a-sm-h12ssl-c.31008/

I have used the 16-core Epyc, but for a pure filer I would expect similar performance with the 8-core one.

If this is a simple filer, you do not need a Slog. If you add virtualisation, the Slog becomes mandatory on a disk-based pool, but you may be better off using an NVMe pool for VMs. With 64GB RAM you also do not need an L2ARC (extension of the RAM-based read cache).
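Note also that a Slog can be added or removed at any time, so you can start without one and decide later (pool and device names below are placeholders):

```shell
# Add a single Optane as a log vdev to an existing pool:
zpool add tank log /dev/disk/by-id/nvme-INTEL_OPTANE-1

# Or mirrored, if you do want that:
# zpool add tank log mirror /dev/disk/by-id/nvme-INTEL_OPTANE-1 /dev/disk/by-id/nvme-INTEL_OPTANE-2

# A log vdev can be removed again without touching the pool data:
zpool remove tank /dev/disk/by-id/nvme-INTEL_OPTANE-1
```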
 

new2town

New Member
May 28, 2020
3
0
1
Just my 2 cents, but you may want to look at enterprise equipment on eBay. I built my NAS in a Dell R730xd LFF using 6 Gbps SATA drives and Intel DC 36XX for cache/log. You can get dual Xeon (6-, 8-, 10-core) with tons of DDR4 ECC RAM for under $1k. My build can saturate the 10Gb NICs on sustained reads/writes... in terms of performance and reliability for the price, it's hard to beat the last-gen enterprise stuff. There are tons of 2U to 4U rackmount servers with SAS3 backplanes on eBay for really cheap.

If you're not planning to do 10Gb or faster, the onboard gigabit NICs on the Supermicro are going to be your bottleneck for any NAS use case, not the drives/RAM/CPU. You could go with even older enterprise equipment and match the real-world performance (i.e. saturate a 1GbE network) of this build.
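Easy to verify where the bottleneck is with iperf3 before spending money (Debian package iperf3; the hostname below is a placeholder):

```shell
# On the NAS, run an iperf3 server:
iperf3 -s

# From a client, run a throughput test against it:
iperf3 -c nas.local
# A gigabit link tops out around 940 Mbit/s regardless of how fast the disks are.
```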

If you're worried about noise / power consumption, YMMV with enterprise equipment.
 

fastmachine

Thanks. I think I am going to make it my New Year's resolution to look more at used enterprise equipment. I always get intimidated by the different vendors and models and such.