First TrueNAS Core build, suggestions needed!


GeeK

New Member
May 6, 2023
Hi,

I am building my first TrueNAS Core NAS for my upcoming homelab, and I'd like some recommendations for the hardware options that are still missing.

Purpose of use
- Home NAS for my family
- Mainly a media server (Movies, photos, office stuff etc.)
- Only NAS functions, all applications, VMs, Docker, etc. run on a separate Proxmox server in the same rack

Build goals
- As reliable as possible
- Slightly overkill, but still "energy-efficient" build
- Snappy experience when browsing and moving files

Current state of build:
Feel free to suggest hardware which is not acquired yet!

Used gear is OK, but it should be available in the EU. I try not to buy anything from China because of the risk of cheap counterfeits.
I prefer server grade components.


Acquired:
- Case: Logic Case SC-4324S - 24-bay 4U case (6Gb/s backplanes)
- CPU: Intel Xeon Silver 4112 (4c/8t)
- CPU cooler: Noctua NH-D9 DX-4677 4U + LGA3647 mounting kit from Noctua (free)
- MB: Supermicro X11SPL-F
- RAM: 6 x Samsung 32GB DDR4 2400MHz ECC REG M393A4K40CB1-CRC (192GB total)
- PSU: Seasonic 650W PRIME TX-650

Not acquired:
- SLOG: Intel Optane 118GB SSD P1600X (might add later if something needs sync writes)
- L2ARC: Can I use consumer-grade NVMe drives? Does a mirror bring any advantage? If I use a special vdev, I guess L2ARC is not needed? Otherwise the idea would be to use L2ARC in metadata-only mode.
- Special vdev: Can I use consumer-grade NVMe drives in a 3- or 4-way mirror? As far as I understand, special devices are a normal vdev (not a logging device), so the write load is no greater than on any other part of the main pool? For this reason, there seems to be no need for Optane-class write endurance?
- Pool: Start with 6 x Seagate Exos X 20TB in RAIDZ2, OR striped mirrors, which I could start with a single pair of 20TB disks. What is your recommendation? 6-wide RAIDZ2 and striped mirrors do not differ much in usable space. The expandability of mirrors and starting with just one pair of disks would be attractive, because it will probably take me quite a while to fill even one pair. RAIDZ2, on the other hand, would bring more peace of mind. (See the sketch below this list.)
- Boot: The ServeTheHome blog recommends the WD Blue NVMe drive (~35€ on Amazon.de); I prefer widely known, reputable brands.
- HBA: Would prefer a single 24i HBA; the LSI SAS 9305-24i could be a good option, though 12Gb/s is useless in my situation.
- 10GbE NIC: Will add later, but I'll take suggestions.
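For reference, here is roughly how the two pool layouts I am weighing would be created (pool and disk names are placeholders; in practice TrueNAS does this through the GUI):

# Option A: 6-wide RAIDZ2, all six disks bought up front
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Option B: striped mirrors, starting with a single pair
zpool create tank mirror da0 da1
zpool add tank mirror da2 da3   # repeat as more capacity is needed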





louie1961

Active Member
May 15, 2023
FYI...I doubt it changes your hardware requirements, but iXsystems recently announced that it is deprecating the BSD-based versions of TrueNAS (i.e. TrueNAS Core) and moving all of its software to Linux in the near future.

On a side note...WOW! Talk about overkill. Intel Xeon Silver, 192GB of RAM, SLOG, L2ARC and special VDEVs? That is way more than "slight overkill" for your intended purposes. I mean a 4 or 8 bay NAS from one of the major vendors could do the same tasks for a lot less money and energy. But as they say, there's no kill like overkill!

Purpose of use
- Home NAS for my family
- Mainly a media server (Movies, photos, office stuff etc.)
- Only NAS functions, all applications, VMs, Docker, etc. run on a separate Proxmox server in the same rack
 

nexox

Well-Known Member
May 3, 2023
The m.2 slot on that board connects through the PCH, not directly to the CPU; it's fine for a boot drive (as is the P1600X) but might not be what you want for minimum latency. The other good choice for boot drives is Supermicro SATA DOMs, which plug directly into the orange SATA ports (those provide power) and don't take up much space. I don't know the details of the ZFS side, but it's generally a bad idea to use consumer SSDs in a server. 12G has been the SAS standard for 10 or 12 years now; any slower HBA is probably just too old to bother with. This is probably also the point to consider a SAS expander rather than a high-port-count HBA, and then to look at a chassis that has the expander built into the backplane.
 

GeeK

New Member
May 6, 2023
On a side note...WOW! Talk about overkill. Intel Xeon Silver, 192GB of RAM, SLOG, L2ARC and special VDEVs? That is way more than "slight overkill" for your intended purposes. I mean a 4 or 8 bay NAS from one of the major vendors could do the same tasks for a lot less money and energy. But as they say, there's no kill like overkill!
No doubt even less powerful hardware would be just fine. However, the main components of this build, bought used, cost almost the same as a more suitable (e.g. Xeon-D) embedded motherboard alone would cost used in the EU.

Xeon Silver offers significantly better expansion possibilities, and according to this review there SHOULD not be much difference in power consumption either.

 

nexox

Well-Known Member
May 3, 2023
Hmm, I hadn't seen that detail about the 4112 using less than its rated TDP; maybe for $10 it would be worth comparing one to the 4114 I'm using in my fileserver build...
 

Tech Junky

Active Member
Oct 26, 2023
Purpose of use
- Home NAS for my family
- Mainly a media server (Movies, photos, office stuff etc.)
- Only NAS functions, all applications, VMs, Docker, etc. run on a separate Proxmox server in the same rack

Build goals
- As reliable as possible - UPS
- Slightly overkill, but still "energy-efficient" build
- Snappy experience when browsing and moving files
Really? Spend a lot on the core HW just to run a file server?

You don't need all of this to do that. If you have other plans for the HW, that's a different story. You could make a file server out of a Pi and a DAS for a whole lot less. In fact, if you're building your other server anyway, just hook the DAS up to that for storage. It's not like you need to push TBs of data around to stream to devices at 25-100Mbps max.

SMB/file serving isn't intensive unless you overthink it and do dumb stuff with it. Mirroring doesn't require much either, as it just copies to additional drives without any parity calculations.

M.2s are a waste of time and money if you're going to do this long term and actually need the speed and capacity. Look into U.2/U.3 drives, as they're a better deal with the same performance. An 8TB M.2 runs you $800 where a U.x would be only ~$400, and the U drives go up to 15/30/60TB if you have deep enough pockets.
 

louie1961

Active Member
May 15, 2023
No doubt even less powerful hardware would be just fine. However, the main components of this build, bought used, cost almost the same as a more suitable (e.g. Xeon-D) embedded motherboard alone would cost used in the EU.

Xeon Silver offers significantly better expansion possibilities, and according to this review there SHOULD not be much difference in power consumption either.

Maybe it's a matter of perspective, as I don't have nearly the storage needs most members here seem to have. I consume less than 1TB of disk space for all of my important data (pictures, documents, Proxmox backups, etc.). I don't have a media server, nor any media libraries of significant size. That said, I have loaded TrueNAS Scale on a $400 2-bay Terramaster NAS with an N5105 CPU, 32GB of memory, and dual 2.5GbE NICs in an LACP arrangement to my switch. It is more than fast enough for my needs and consumes only 14 watts. I have the exact same needs for my NAS that you outlined. Other than ECC memory, I'm not sure what your build would give me that the $400 Terramaster running TrueNAS doesn't (other than a higher electric bill ;) )
 

Tech Junky

Active Member
Oct 26, 2023
media server
This is where it gets interesting, typically when you have crappy clients that can't handle native file formats and require transcoding. In that case, adding a cheap GPU like the A380 and running everything through HandBrake into a format the clients can play without intervention makes more sense. I convert all of my stuff with the A380 after switching from Intel to AMD; it consumes very little energy compared to the CPU-only process and cuts the time down to 1/8th of what the CPU took.

I used to use another program that automated the process, but they went to a subscription update model even with a paid license.
 

nasbdh9

Active Member
Aug 4, 2019
1. My suggestion is to use a 9500-8i + 82885T expander instead of an HBA with a 24i interface.
2. Please confirm whether synchronous writes actually occur in your usage scenario; otherwise a SLOG is meaningless. Any NVMe SSD can handle a small amount of synchronous writes.
3. Always choose a high-frequency CPU. Multi-core and HT are absolutely not necessary for storage systems (I even choose to turn HT off). For example, on the LGA3647 platform, the performance gap between 2.3GHz and 3.5GHz+ CPUs in Samba and iSCSI scenarios can be as high as 60%.
4. L2ARC makes sense, but there is little need to consider multiple devices. You can manually create multiple partitions on one device to improve performance utilization.
5. A special vdev is necessary when there are a large number of HDDs. If you consider expanding to 12x RAIDZ2, for example, my suggestion is to budget 1.5G of special vdev per 1T of hard disk space, with a recommended special vdev allocation block size of 32K. (See the sketch after this list.)
6. SATA is the most affordable choice for the boot disk. It does not need to be connected to the HBA; the onboard SATA is enough.
7. Please make sure your power supply has enough 5V current.
8. Don't bother with a 10G network card; buy an mcx4121a (25GbE) directly.
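A rough sketch of what point 5 could look like from the CLI (pool name and device paths are placeholders; TrueNAS would normally do this via the GUI):

# Sizing per the rule of thumb above: 12x 20TB in RAIDZ2 is ~200TB usable,
# and ~1.5GB per 1TB suggests a ~300GB special vdev, so mirrored ~400GB
# NVMe drives would leave comfortable headroom.
zpool add tank special mirror nvme0 nvme1
# Blocks of 32K and smaller go to the special vdev (metadata goes there regardless):
zfs set special_small_blocks=32K tank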
 

mattventura

Well-Known Member
Nov 9, 2022
501
254
63
Intel Optane 118GB SSD P1600X (might add later if something needs sync writes)
You would want to have redundant drives for SLOG.

Can I use consumer-grade NVMe drives? Does a mirror bring any advantage? If I use a special vdev, I guess L2ARC is not needed? Otherwise the idea would be to use L2ARC in metadata-only mode.
No reason to use consumer-grade drives when you can get enterprise-grade drives for cheap. You don't mirror these; an L2ARC is a bit more analogous to a RAID 0. If one dies, you don't lose any data, because it was only caching data from somewhere else.
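A minimal sketch with placeholder names - cache devices are simply listed and stripe automatically:

zpool add tank cache nvme2 nvme3
# If a cache device dies, ZFS just re-reads from the main pool; nothing is lost.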

Can I use consumer-grade NVMe drives in a 3- or 4-way mirror? As far as I understand, special devices are a normal vdev (not a logging device), so the write load is no greater than on any other part of the main pool? For this reason, there seems to be no need for Optane-class write endurance?
You're correct in that you need (really, want) these to be mirrored, since losing them will probably kill your pool. If you have a special metadata device, then you'd want your L2ARC to be used for all data rather than just metadata.

Start with 6 x Seagate Exos X 20TB in RAIDZ2, OR striped mirrors, which I could start with a single pair of 20TB disks.
What is your recommendation?
6-wide RAIDZ2 and striped mirrors do not differ much in usable space. The expandability of mirrors and starting with just one pair of disks would be attractive, because it will probably take me quite a while to fill even one pair. RAIDZ2, on the other hand, would bring more peace of mind.
Mirrors tend to have the best combination of performance, reliability, and flexibility (sketch below). I generally only use RAIDZ when I'm trying to absolutely maximize capacity, and I would use at least RAIDZ2 if I'm running old used drives.
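To illustrate the flexibility point, a sketch with placeholder names - a pool of mirrors grows a pair at a time, and top-level mirrors can even be removed again:

zpool add tank mirror da6 da7    # grow the pool one mirrored pair at a time
zpool remove tank mirror-1       # evacuate and remove a whole mirror vdev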

The ServeTheHome blog recommends the WD Blue NVMe drive (~35€ on Amazon.de); I prefer widely known, reputable brands.
You don't necessarily need a separate boot device. I'd spend a little more and get something nice that has PLP given what a small percentage of your overall build cost it represents.

Would prefer a single 24i HBA; the LSI SAS 9305-24i could be a good option, though 12Gb/s is useless in my situation.
Why 24i? Given that you're starting with 6 disks, they would fit on an 8i HBA, and you can always add a SAS expander later. It's also cheaper than a 24i HBA.
 

nabsltd

Well-Known Member
Jan 26, 2022
You would want to have redundant drives for SLOG.
Only if it's a requirement to maintain performance until the drive can be replaced. If a SLOG fails, ZFS moves the ZIL back to the main drives of the pool. The SLOG is only read after some sort of unexpected shutdown... otherwise it's a write-only device.

Since most NVMe drives don't handle hot swap well, replacing a failed SLOG will require a system shutdown anyway.
 

nexox

Well-Known Member
May 3, 2023
Hmm, I hadn't seen that detail about the 4112 using less than its rated TDP; maybe for $10 it would be worth comparing one to the 4114 I'm using in my fileserver build...
I did some idle power testing: switching to a Bronze 3204 made no difference, and adding a second 4114 only increased idle consumption by 20W. I don't think there's a whole lot to be gained by testing the 4112.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Haha, lots of differing opinions on things. There also might have been some FUD thrown in above (the death of CORE is being hyped by some, but that was not how I read things) - whoops, I digress and don't want to crap on the thread:

Scale vs CORE

Used to be that the sharing performance of Scale was subpar to CORE's. That's not really the case any more. The same was said for ARC, but that's notably improved recently too. Are you familiar with Linux? FreeBSD? Pick the platform that works better for you. There is, and I'm pretty sure will be for a while, an easy upgrade path from CORE to Scale if you want to start with CORE. FWIW & IMO, the VM host side of things is a bit more flexible in Scale than in CORE and will keep improving in Scale, vs. staying pretty static in CORE. Last 2 cents on this topic: on either platform, Scale or CORE, the more you do outside the GUI, the more likely you are to get into trouble (think stability) with your system, so consider carefully the system's ultimate use and how you intend to sysadmin the server.

ARC & L2ARC

Generally stated best practice is to MAX out your memory for ARC before you look at L2ARC, due to the main-memory "cost" of the in-memory data structures needed to manage L2ARC. Apply common sense to this item.

Oversimplifying a bit: ARC (and L2ARC) are going to really speed things up if you can capture the bulk of your ACTIVE ON-DISK WORKING SET in RAM. Working set == all the data / files / contents in use at any given time.

Example: you have 5 different movies being streamed off storage, each say 2GB (1080p HQ HEVC) in size, and read-ahead caching says "suck the whole file into memory" - that's 5 movies, 5 people doing this concurrently, so 10GB. Based on your system build, TNC/S will likely eat 75% of your memory for ARC. Hmm, okay, so what's the need for offboard (L2)ARC? Let's 5x that example... that's still only 50GB out of the ~140GB of ARC RAM in your system. Do you need L2ARC? NB: want != need. If you want it, great. But will it make your system serve files faster?

But wait... you're only going to go with 1Gbps NICs to begin with(!). I'd claim you could service that working set direct from disk even without a huge amount of memory lying about for ARC, because your initial storage pool at 1x6 RAIDZ2 will serve at 1Gbps direct from disk - so the NIC is your service cap (really, bottleneck), not your storage-to-NIC transfer rate.
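Once the box is up, you can sanity-check how much ARC actually gets used with the tools TrueNAS ships (pool name is a placeholder; the command is arc_summary.py on some versions):

arc_summary              # ARC size vs. target, hit ratio, MRU/MFU breakdown
zpool iostat -v tank 5   # how much read traffic actually reaches the disks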

Memory
Want more ARC? Instead of 32GB RDIMMs, start with 64GB RDIMMs. Just buy 3 to begin with - used, they should be about the same cost (okay, maybe a little more expensive). Watch your ARC stats on the newly built system and add 3 more if you think it will help your ARC performance. That way you aren't trying to find a buyer for your 32GB sticks when you decide you want to double your RAM... :)

Motherboard
There is an X11SP* variant that has both an onboard SAS3 HBA *and* dual 10GbE SFP+ ports. Since you don't have the motherboard yet, you might consider that as your choice: you get 10GbE out of the gate, but NO onboard 1Gbps. The X11SPL is a great and flexible board, though, and everything else I talk about will still work - you're just adding cards for functionality instead of having it on the motherboard.

SAS
+1. Go with a SAS3 HBA (8i) and an expander, since your drive backplanes are direct-connect.

Boot Device (Mirror)

Others have said it (and the reality of availability & cost in the EU differs from the US), but please don't burn an NVMe drive on boot. It's a NAS, and honestly probably more production than lab. You shouldn't be booting it that often, and you're realistically talking about 15-30 seconds of time savings. Your boot time is going to be spent more on spinny-disk enumeration and pool import (especially as you add more vdevs). The SM SuperDOM is awesome, as stated previously: no physical footprint to speak of, and pretty much fast enough. Second choice in my book would be a used enterprise-grade SATA disk, typically about $10-15 USD - you don't need much; in fact most of the space will be wasted. However, I would ABSOLUTELY mirror your boot device.
I think that chassis (not far from the Norco 24-bay design) has a little shelf (either included or optional) for a pair of 2.5" drives. An Intel DC S3500 80GB is plenty big enough.

GPU
Make sure your motherboard has a physical x16 slot so you can add a transcoding GPU (x8 data is fine, but go with x16 physical) - see the board linked above. An Nvidia P2xxx is still great: powered from the slot, and handles all but AV1 pretty well. You can also look at the newer Intel Arc cards. Really, lots of options. Transcoding does not need all x16 lanes - you just need x16 physical for the GPU; x8 is more than enough. Alternatively, if all your playback devices can direct-play any format, you won't need to worry about transcoding.

Offboard transcoding.
A little N100-based micro PC (NUC-sized) is more than capable of transcoding everything you need. Serve your file shares to it and run your media server bare-metal; not much power or physical footprint required. The cost may not be much more than a transcoding GPU, and in some cases (depending on GPU/feature set) may in fact be LESS than the GPU. Doing this also removes the need for an x16 GPU slot.

NVME
If you will be running Docker, containers, jails, or VMs, run those off an NVMe-based ZFS pool: mirror or better.

Read the motherboard manual before you buy to see how the M.2 slots are configured and whether they come off the CPU or the PCH.
Look at what PCIe slots you'll have and what options there are for enterprise or prosumer AICs, bifurcated M.2 carrier cards, bifurcated U.2 adapter cards, etc. For AICs: the Intel P3605 1.6TB and Optane 905P 960GB or 1.5TB are all great options, and they are pretty simple to install and use. Bifurcated cards can be a bit tricksy for some folks (sometimes).

Lots of opinions, many different ways to go.

Don't let decision paralysis get in your way; pick what makes the most sense to you and, if you can, think about what your use case looks like in 6, 12, and 18 months so that you have room to grow without majorly changing your system configuration.

Don't forget to have fun along the way too!
 

mattventura

Well-Known Member
Nov 9, 2022
Only if it's a requirement to maintain performance until the drive can be replaced. If a SLOG fails, ZFS moves the ZIL back to the main drives of the pool. The SLOG is only read after some sort of unexpected shutdown... otherwise it's a write-only device.

Since most NVMe drives don't handle hot swap well, replacing a failed SLOG will require a system shutdown anyway.
It's more of a risk if you have some kind of outage that both kills the drive and crashes the host at the same time. Though that can also happen to multiple drives in the same system.
 

GeeK

New Member
May 6, 2023
Thanks for all the replies!

There are so many replies that I'll try to answer them all in general at once, and add some more of my own thoughts along the way.

MB
Changing the X11SPL-F to the X11SPH-nCTPF more than doubles the price when bought new (450€ -> 980€), so I don't think it is a good option. Availability also seems to be pretty much non-existent.

SLOG
I will think more about the necessity of a SLOG once the server is operational. I will probably end up with P1600X Optane drives, as they are relatively cheap - probably in a mirror configuration on a Supermicro PCIe bifurcation card (sketch below). I'm still interested in hearing other suggestions, but this isn't high on the priority list right now.
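As I understand it, a SLOG can be added (and removed) later without touching the data - roughly like this, with placeholder names:

zpool add tank log mirror optane0 optane1
zpool iostat -v tank 5   # a busy log vdev confirms sync writes actually happen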

L2ARC
I will think more about the necessity of L2ARC once the server is operational. If I end up not using the special vdev, an L2ARC in metadata-only mode could perhaps slightly speed up metadata reads from the rust vdev(s), so browsing large image folders, for example, could be snappier (sketch below). I'm still interested in hearing suggestions, but this isn't high on the priority list right now.
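If I go that route, my understanding is that it is just this (names again placeholders):

zpool add tank cache nvme4
zfs set secondarycache=metadata tank   # cache only metadata, not file data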

Special VDEV
This is perhaps the most interesting topic at the moment in terms of build progress, since adding a special vdev will be simplest while the pool is completely empty. I would like device recommendations as well as personal experiences: if you use a special device, what hardware do you use?
I would prefer a Supermicro PCIe bifurcation card and the NVMe form factor.

Boot
This got many suggestions. A SATA DOM would be an interesting option, but for that price you could get several NVMe or SATA SSDs. Is it worth it? My goal is to minimize the cable jungle inside the case, so SATA SSDs are not my first option. For this reason, a Supermicro PCIe bifurcation card or the motherboard's NVMe slot would be the priority. Did I understand correctly that the M.2 slot on the motherboard hangs off the PCH and does not "use up" any of the available PCIe slots? If so, this sounds like the best option for the boot drive.

HBA
An 8i HBA is enough for current needs, that's true. The idea behind a 24i HBA is that I could use all the backplanes of the case at once and distribute the vdevs evenly across them; that way, the failure of one backplane does not take the entire pool with it.
Of course this can also be implemented with, e.g., an 8i HBA + expander, but I would like to save PCIe slots as much as possible. Believe it or not, I also think about power consumption, and I would expect a single 24i HBA to be more energy-efficient than an 8i + expander.

10 GbE NIC
Maybe I'll skip 10GbE completely and go directly to a 25GbE NIC if there isn't a big difference in price. I'll have to study the options more closely when this becomes relevant. Thanks for the suggestion!

OVERKILL
I know this build is a bit overkill for its intended use (a pure storage server).
However, this is a hobby; it's not so much about getting thing X done as cheaply and with as little power consumption as possible. The main components do not cost much more than significantly slower consumer-grade hardware; for example, the CPU and 192GB of ECC RAM cost a total of 300€.

In addition, I live in northern Europe, where it is cold outside for more than half the year. The server rack is located in an outdoor storehouse heated by direct electric heating. During the winter, running the server is practically free, since the building is heated at least partially by the rack's waste heat. Electricity is also not particularly expensive where I live, around 0.1€/kWh.
Despite this, I have tried to choose the components with the lowest power consumption among the various overkill options.
 

nexox

Well-Known Member
May 3, 2023
A SATA DOM would be an interesting option, but for that price you could get several NVMe or SATA SSDs.
I know prices vary globally, but these are usually not that expensive; I got a couple barely used for $35 US each.