Advice for new TrueNAS CORE build


xyzzy

New Member
May 9, 2021
This TrueNAS CORE box is going to be used exclusively for VMware ESXi datastores via iSCSI (with sync writes enabled).

Supermicro H12SSL-NT
Epyc 7262
2 x DDR4-3200 64GB ECC DIMMs (will eventually fully populate all 8 channels)
Noctua NH-U12S TR4-SP3 CPU cooler/fan
Seasonic Prime TX 750W power supply

Boot pool:
-- Intel S4610 SATA SSDs (240 GB) mirrored
Data pool #1 (new HW):
-- 2 Intel P5510 NVMe PCIe4 SSDs (3.84 TB) mirrored
-- 1 or 2 Intel P4801x NVMe PCIe3 SSDs (100 GB) (striped if more than 1) as SLOG
Data pool #2 (reusing existing HW):
-- 4 WD Gold 10 TB as striped mirrors (RAID 10 equivalent)
-- 1 or 2 Intel S3710 SATA SSDs (200 GB) (striped if more than 1) as SLOG
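For context, here's the rough usable-capacity math I'm working from for the two data pools above. This is just back-of-the-envelope arithmetic on the nominal drive sizes; the ~50% occupancy cap is the commonly cited guideline for iSCSI/zvol block storage on ZFS, not a hard rule.

```python
# Rough usable-capacity math for the two data pools (nominal, base-10 TB).
# The 0.5 factor reflects the often-cited guideline of keeping iSCSI/zvol
# pools under ~50% occupancy for performance; adjust to taste.

def usable_tb(drive_tb: float, drives: int) -> float:
    """Usable TB for a pool of 2-way mirrored vdevs (RAID 10 style)."""
    return drive_tb * drives / 2

pools = {
    "data #1 (2 x 3.84 TB P5510, mirror)": usable_tb(3.84, 2),
    "data #2 (4 x 10 TB WD Gold, striped mirrors)": usable_tb(10, 4),
}

for name, tb in pools.items():
    print(f"{name}: {tb:.2f} TB usable, ~{tb * 0.5:.2f} TB at 50% occupancy")
```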

NVMe SSDs will be connected to the onboard SlimSAS x8 ports or a Supermicro AOC-SLG4-4E4T (which uses a PCIe 4.0 x16 slot). SATA disks will be connected to the onboard SlimSAS x8 port (with a breakout cable) or a Broadcom 9305-16i.

I'm really unsure about the NICs. The P5510s have sequential specs of 6500 MB/s (read) and 3400 MB/s (write), but they won't hit that under ZFS. Still, I think that pool will easily saturate a 10GbE connection. Should I go with a 25GbE NIC, or try to get iSCSI multipath working across multiple 10GbE links? Or should I go with a 40GbE NIC to support additional NVMe pools like data pool #1 in the future?
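To put rough numbers on that, here's the simple line-rate arithmetic I'm looking at (raw link speeds only, ignoring iSCSI/TCP overhead, which will shave some percentage off in practice):

```python
# Line-rate arithmetic: how NIC speeds compare with the P5510's rated
# sequential throughput. Real iSCSI throughput will be lower because of
# protocol overhead, and ZFS won't hit the raw drive specs anyway.

GBIT_TO_MBPS = 1000 / 8  # 1 Gbit/s = 125 MB/s (decimal MB)

nics_gbit = {"10GbE": 10, "25GbE": 25, "40GbE": 40}
p5510 = {"seq read": 6500, "seq write": 3400}  # MB/s, per-drive spec sheet

for name, gbit in nics_gbit.items():
    link_mbps = gbit * GBIT_TO_MBPS
    read_limit = "NIC" if link_mbps < p5510["seq read"] else "drive"
    write_limit = "NIC" if link_mbps < p5510["seq write"] else "drive"
    print(f"{name}: {link_mbps:.0f} MB/s line rate "
          f"(reads limited by {read_limit}, writes limited by {write_limit})")
```

On paper, even a single 10GbE link (~1250 MB/s) is well below one P5510's rated sequential numbers, and 25GbE (~3125 MB/s) still sits under the drive's read spec, which is why I'm torn between a faster single link and multipath over 10GbE.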

The motherboard was chosen for PCIe 4.0 expansion (more NVMe drives) in the future.

The CPU was chosen because it's the cheapest Rome CPU with full 8-channel memory bandwidth (although I'm unsure how much memory bandwidth really matters for TrueNAS).

Many thanks in advance!

xyzzy

New Member
May 9, 2021
Anybody got any feedback on my proposed build? Or should I have posted this under "DIY Server and Workstation Builds"?