Review of TrueNAS Build


gauntface

New Member
Dec 21, 2021
Hi all,

I've never set up a server of any kind. I was contemplating Synology vs DIY and landed on DIY because I hoped I'd get more for my money.

- I'm planning on using TrueNAS Scale
- I only need 4TB of usable space to start but I suspect this to grow over time
- I have a Samsung SSD available. I'm not sure if it'll work with this motherboard, and I suspect it's overkill for this, but I'll give it a try
- Primary use will be NAS and media server

I'd love any input on whether this build looks OK or not. The motherboard (X10SDV-4C-TLN2F) was picked because it's available on eBay for $250.

 

Parallax

Active Member
Nov 8, 2020
London, UK
This hardware configuration is more than fine for a NAS server unless you have extraordinary requirements.

The question is, what does your "media" requirement entail and how do you plan to operate it? Are you going to want to run applications natively on the NAS? Or do you plan to deploy a Docker environment and run a bunch of containers? Do you want to play with Kubernetes (henceforth "k8s")?

I ask because, having gone through this very recently with TrueNAS Scale, the NAS side is really nice but the container side I found abysmal. It's based on k8s, and therefore the majority of stuff you will look up on the Internet to help you set up your system will be utterly useless. There are a few pre-packaged installs like Plex and whatnot, but the whole interface to get them up and running is not user-friendly (yet? Ever?). There's no easy way to set up and install Docker. If you do want to play with k8s, you will find iX Systems' implementation is non-standard, so you're not learning anything you can apply elsewhere (e.g. professionally), and if you want something with a nice GUI for k8s like Portainer or Rancher you'll have to hack them on to it, and even then you will have to modify Helm charts to get new container installs to work. You will also not easily be able to run applications natively on the host; you will mostly need to install them as containers, which may or may not be a limitation for you. And the VM capability of Scale is not available yet ("soon").

It's perhaps a small point, but under TrueNAS the leftover space on your SSD will not be usable without some fairly substantial fiddling around. This wouldn't be bad if it were a tiny one, like 32GB, but you are going to be pretty peeved at the waste of ~470GB of space on your expensive boot drive as it stands.

Personally I would look at OpenMediaVault (OMV) - it's very barebones and not very pretty, but it gets the job done, supports ZFS if that floats your boat, and runs mostly standard Debian under the hood. You can add Docker through omv-extras and even KVM if you're keen. If you want just pure NAS functionality then you can use TrueNAS Scale (or Core, why not?), and then I'd suggest you run a small-ish server for your Docker environment and whatnot. Or you can install Proxmox on the server, share the drives from it, and build either a VM or an LXC to run Docker et al; I did this for a while.

Apologies if you know all this, but if you didn't, I wanted you to know (in my opinionated view!) what you were getting into first.
 

Parallax

Active Member
Nov 8, 2020
London, UK
I should add that although I am far from any kind of expert, I do run a five-server k8s cluster for home lab purposes, and my home environment runs in another 30 or so containers under Docker. So for me to hate the TrueNAS Scale container environment so much after 48 hours was an achievement. If you wander over to their forums you'll see me and some other guy complaining about all the above, and in my case at least (rightly or wrongly) wondering out loud who the target audience for TrueNAS Scale as currently architected actually is. I've been responsible for product strategy and engineering at various places for, um, over 30 years now, so I can't help but pick at the seams of things.
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
Parallax raises a number of excellent points. I don't necessarily agree with all of them, but they are certainly valid from a certain point of view.

I will dive a bit into the hardware, having spent a few months building up a few systems based on this board in small ITX cases. My use cases were relatively well defined, though, and honestly I'm thinking you may not have thought through all of yours - and that is okay! Build, give yourself room to grow, and if possible make design decisions that don't hamstring you in 6-12 months (and cost you a bunch of wasted money/time).

The X10SDV-4C-TLN2F is a sound choice for a basic NAS with a light virtualization load.
Buy a board off the bay that has an active CPU cooler *or* be prepared to aftermarket-mod something that is either very large (chassis constraints) or has active cooling.
Read the manual.
With your listed hardware choices you will use the 24-pin ATX-style power plug on the motherboard.
DO NOT use the 4-pin power plug that looks like a CPU power plug.
If you use all 6 SATA ports you will drop a PCIe 3.0 x1 lane from your m.2 (see the manual).
If you run 10GbE on this board then that large heat sink behind the VGA port will get very hot and will likely need more airflow than you planned.
If you run 1GbE you'll be fine.
The motherboard supports PCIe bifurcation on the x16 slot. There are some exotic risers you can consider to add more hardware.

Memory
If you are buying the motherboard from the bay, look there for memory too.
Don't buy 8GB sticks. Look at either 2x 16GB sticks or 2x 32GB.
You could also start with a single stick of 16 or 32GB (with a drop in performance) and add another later based on funds availability.

Storage
If you build it, you will fill it. I've been telling folks this for 25 years.
If you think your initial target is 4TB usable, then build for 6 or 8TB on day 1. If your initial target is 2TB, stick with 4TB.
If you want new drives with warranty, great.
Used 4TB SATA or SAS drives are a little more than 1/3 of the cost of your new drives. Just want to point this out.
Used 8TB SATA or SAS drives are easily the same cost as your new 4TB IWP.
Whether new or used, do plan on running badblocks on your drives as soon as you get them - i.e., build and test the server first, then buy drives.
NVMe: don't waste this on a boot device, especially if you are going to run Scale, Core, or Prox.
Think about having a cold-shelf spare drive for your main storage and one for your boot drives.
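For anyone new to drive burn-in, a minimal sketch of the badblocks-then-SMART routine looks like this. It's written as a dry run (it only prints the commands) because the write test destroys all data on the drive; /dev/sdX is a placeholder device name, not a real one.

```shell
# Burn-in sketch for a new, EMPTY drive. Dry-run guarded so nothing gets
# wiped by accident; replace /dev/sdX with your real device (check lsblk).
DRIVE="/dev/sdX"
DRY_RUN=1

# -w: destructive write+verify pass (erases the drive), -s: progress, -v: verbose.
# A 4TB drive can take a day or more; run it inside tmux or screen.
BURNIN_CMD="badblocks -wsv $DRIVE"

# Follow up with a long SMART self-test, then review the attributes.
SMART_CMD="smartctl -t long $DRIVE"

if [ "$DRY_RUN" -eq 1 ]; then
  echo "would run: $BURNIN_CMD"
  echo "would run: $SMART_CMD"
else
  $BURNIN_CMD && $SMART_CMD
fi
```

Set DRY_RUN=0 only once you're certain DRIVE points at the new disk and nothing else.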

Case
If it works for you, great!
If you don't need hot swap, look at Fractal Design's Node series.
There is also nothing wrong with putting an ITX board in an mATX or mid-tower case.

Power
For me: whatever works, has the right number and types of connections, is relatively quiet, has a good reliability history, and fits the budget.

What you didn't list or mention:

Chassis build out
Get all the hardware you need to fully connect (data and power) all the bays in your chassis whether you fully fill the bays or not.

Boot
How important is it that this server stays up? 24/7, and people get mad if it goes down? Plan on a software-RAID boot pool.
You can configure that with TNS or Core, and with plain-Jane Linux distros.
Want new boot drives? Something like Inland (US) 120GB SSDs runs about $20.00 each.
Used? You can get Intel DC S35xx 80GB or 120GB drives for that too.

HBA
You have 5 SATA ports, 6 if you burn an m.2 PCIe lane. Your case supports qty 8 3.5" drives and qty 4 2.5" drives.

Backup
Do you already have a plan to back up this server?
If not, build it into your budget.
Plan on backups running on day two so you don't procrastinate.

Media Server and this board.
If your media is already transcoded to what you want - great (i.e., pass-through playback).
I don't recommend trying to transcode using this board. Use the PCIe slot for an HBA, not a GPU, or use bifurcation to get both.
If you need transcoding, then look at a TMM (Tiny/Mini/Micro) as a bare-metal Plex/Emby/Jellyfin (or whatever your media management is) server.

VMs and containers on NVMe
Putting critical VMs or containers on an unprotected NVMe is asking for trouble. See Backup.
Better: figure out how to configure mirrored NVMe for VMs and containers.
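As a sketch of what "mirrored NVMe" means in practice: with two NVMe drives and ZFS, one command creates a pool that survives a single drive failure. The pool name ("apps") and device paths are placeholders, and this is shown as a dry run - on TrueNAS you would normally do this through the UI instead.

```shell
# Sketch: mirrored ZFS pool from two NVMe drives, so one drive dying
# doesn't take your VMs/containers with it.
# Device paths below are placeholders -- confirm yours with:
#   ls /dev/disk/by-id/
NVME_A="/dev/disk/by-id/nvme-EXAMPLE-A"
NVME_B="/dev/disk/by-id/nvme-EXAMPLE-B"

# "apps" is just an example pool name.
POOL_CMD="zpool create apps mirror $NVME_A $NVME_B"
echo "would run: $POOL_CMD"

# Afterwards, check pool health (and have something alert on it):
echo "would run: zpool status apps"
```

Using /dev/disk/by-id/ paths rather than /dev/nvme0n1-style names keeps the pool stable if device enumeration order changes between boots.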

Don't get hit by decision paralysis - at the same time, from personal experience I can tell you it is frustrating to order a bunch of stuff, start putting it together and trying to use it, and think DRATS, I should have done XYZ.
 

zer0sum

Well-Known Member
Mar 8, 2013
You should look at Unraid :)

It is very flexible with disks and can use your SSD as a cache drive.
The Docker app store is incredible and you can spin up new apps in minutes.
And you can also run full-blown virtual machines if you need something a bit heavier.

 

Parallax

Active Member
Nov 8, 2020
London, UK
You should look at Unraid :)
I'm not the OP, but Unraid is the sort of direction I thought TrueNAS Scale was going to go. The project I'm actually most excited about at the moment is Harvester HCI, but it does assume you really do want to run all your containers (and VMs!) in k8s. It's a bit rough around the edges, but it's such a pure expression of where things are going that I can't help but admire it. And Rancher and Longhorn are very good.
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
... X10SDV-4C-TLN2F is a good sound choice ...
If you use all 6 sata ports you will drop a pice 3.0 x1 lane from your m.2 (see the manual).
And won't this mean that the m.2 slot will then be x2? Since the PCIe spec (supposedly) says that each link is a power-of-two number of lanes.
@itronin , if you do the experiment, pls follow-up [or PM]; tnx
 

itronin

Well-Known Member
Nov 24, 2018
Denver, Colorado
And won't this mean that the m.2 slot will then be x2? Since the PCIe spec (supposedly) says that each link is a power-of-two number of lanes.
@itronin , if you do the experiment, pls follow-up [or PM]; tnx
@gauntface since this is germane to your OP, here's the block diagram from the X10SDV manual

[attached: block diagram from the X10SDV manual]

I did not collect stats, but I did "feel" a difference using an Optane 900P with and without using SATA0#1 (SM SuperDOM). For me it only matters with one of the systems I built. The rest have an Optane 800P in the m.2, and that is only x2, which is why I am currently using SuperDOMs in those systems. ;)

@gauntface you might also take a look at this thread by @maes, which documents a build he did using another variant of your proposed motherboard. It really shows the extent to which you can configure and push the type of setup you are looking at. I think it's fun and idea-provoking to look at how others use the components I'm looking at.

EDIT - I feel I need to add that I am NOT encouraging or suggesting you attempt a UNAS NSC-800 as your first build. It's a PITA to work in, not the greatest case, and I personally only built a couple of systems in that chassis because I "got a deal" on them. For a first-time server build you might consider limiting yourself to cases that are easy to work on (which may exclude some of the ITX cases you may be looking at). This is why in my first reply I mentioned you can install an ITX motherboard in an mATX case or a mid-tower.
 

gauntface

New Member
Dec 21, 2021
Thanks for all the info all, it's super helpful.

I've been weighing Synology vs DIY, and I keep coming back to DIY just because of the flexibility in the long term, but it comes with all the risks of mistakes / managing issues.

I am considering Unraid as well.

My original plan for media was to use Plex to stream some 1080p video files. Some of the content will probably end up being transcoded.

It seems like the NVMe is a bad idea and just using a 2.5" SSD would be better. I'm likely to have this set up as something that is switched off each night, and for backup I was planning to use Backblaze.
 

ddaenen1

Member
Jul 7, 2020
Thanks for all the info all, it's super helpful.

I've been weighing Synology vs DIY, and I keep coming back to DIY just because of the flexibility in the long term, but it comes with all the risks of mistakes / managing issues.

I am considering Unraid as well.

My original plan for media was to use Plex to stream some 1080p video files. Some of the content will probably end up being transcoded.

It seems like the NVMe is a bad idea and just using a 2.5" SSD would be better. I'm likely to have this set up as something that is switched off each night, and for backup I was planning to use Backblaze.
What speaks against TrueNAS Core for you? It works great and is extremely solid and reliable. I run Plex and Nextcloud on it without any issue. I have also been looking at TrueNAS Scale, but I fear it is not yet in a place where I would want it for home usage, with the family able to use Plex as easily as, for example, Netflix or Disney+ on our Samsung TV. Core does exactly that: Plex "just runs".
 

Parallax

Active Member
Nov 8, 2020
London, UK
I've been balancing out synology vs DIY and I keep coming back to DIY just because of the flexibility in the long term, but it comes with all the risks of mistakes / managing issues.
The main trouble with the Synology-style boxes is that the CPU tends to be underpowered, especially for the money you pay. It's a hack and in a legal grey area, but you could try Xpenology - even temporarily, to get a feel - which would let you have the simplicity of the Synology UI on a server you've built yourself. I left Xpenology when it started getting tricky to run the loader software on my hardware, but for evaluation purposes you don't need the latest and greatest.

Just a thought.
 

Parallax

Active Member
Nov 8, 2020
London, UK
What speaks against TrueNAS Core for you? It works great and is extremely solid and reliable.
I'm not the OP, but there are two issues for me with Core, both shortcomings on my side rather than any issue with Core per se. One is that nearly everything I have at home runs Debian, and having to keep my hand in on FreeBSD feels like extra work I don't need. The second is that I really need to run a few VMs on my "NAS" box, and I'm a lot more familiar and happy with KVM or ESXi than I am with jails.

I tried Scale out, but the k8s implementation was too non-standard and clunky for me. I run a k8s cluster at home, so this shortcoming I put on Scale rather than on me. ;) Presumably it will improve over time, but unless they have a major change in approach I think it's still not for me architecturally - I'm not going to move all my virtualisation environment(s) to Scale to get them interoperable.
 

RageBone

Active Member
Jul 11, 2017
TrueNAS Core can do actual VMs, not just jails.
And you don't need to fiddle around with another "FreeBSD", because it should be an appliance that you don't need to keep a hand on.