Help needed with NAS build


guiniol

Member
Oct 11, 2024
Hey all!

I have the beginning of a parts list but I haven't built in a long time and I want to see if there are better options.

So, I am building a NAS that will also run Immich (accessed over Tailscale, so Tailscale as well). Nothing else. The NAS will hold 95% pictures and videos, uploaded through Immich and from PCs. It will sit under a desk and needs to be silent.

Starting configuration is 5x 4TB SSDs in RAIDZ2 (unless I find a really good deal on larger SSDs), and I want room to add more disks if needed (especially now that ZFS supports RAIDZ expansion), plus an M.2 SSD for the boot drive. I also want ECC and to be able to add an SFP28 NIC in there.
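For sizing, my back-of-the-envelope numbers (a rough Python sketch; it assumes RAIDZ2 always spends two disks' worth of space on parity and ignores ZFS metadata and slop overhead, so real usable space lands a bit lower):

Code:
    # Back-of-the-envelope RAIDZ2 capacity (ignores metadata/slop overhead).
    def raidz2_usable_tb(disks: int, disk_tb: float) -> float:
        # RAIDZ2 keeps two disks' worth of parity, regardless of vdev width.
        return (disks - 2) * disk_tb

    print(raidz2_usable_tb(5, 4))  # 5x 4TB today       -> 12 TB usable
    print(raidz2_usable_tb(8, 4))  # expanded to 8 wide -> 24 TB usable

One caveat with RAIDZ expansion: blocks written before an expansion keep their old data-to-parity ratio until they are rewritten, so the realized capacity right after expanding sits somewhat below what the formula suggests.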

I started to build around the AM4 platform and this is what I have (based on what's readily available locally):
  • Motherboard: Asrock B550 Pro4 (the Asrock B550M Pro4 is the micro-ATX equivalent, depending on the case I go for).
  • CPU: AMD Ryzen 5 Pro 5650G
  • 2x 32GB ECC UDIMMs

My main questions are:
  • Platform: should I be looking at AM5? Or Intel?
  • Motherboard: I looked for motherboards with at least 6 SATA connections and ECC support. I saw some ASRock Rack boards, but those are significantly more expensive. I also found one with a 10GbE (or maybe 2.5GbE) port, which would let me start without the SFP28 NIC (which I won't need for a few months), but it was more expensive than the models I listed above plus a ConnectX-4.
  • Case: I am hesitating between:
    • Node 804: micro-ATX, simple and cheap, but I would honestly prefer full-size ATX
    • Jonsbo N5: fits the bill, except for the 3.5" bays that I'd need adapters for, and I have no idea if I should be worried about the longevity of the backplane. Also very expensive
    • What other case should I be looking at? Needs to be as innocuous as possible, and I think the "cube" form factor does that better than the tower one.
  • SSDs: it seems M.2 SSDs are even cheaper than SATA ones. Should I be going for that? There must be expansion cards to add M.2 slots. That would open up the choices to motherboards with fewer than 6 SATA ports, but force full-size ATX so that I also have another slot for the SFP28 NIC.
Other questions:
  • What sites do people use to share builds? I used to use pcpartpicker.com, but it was missing the first few components I tried to add... and I couldn't find an equivalent site.
  • How do you deploy your machines? I use Ansible to manage my dotfiles, but I want something that will provision the whole system this time. I started looking at NixOS, and I'm wondering what others are using.
  • IP-KVMs: assuming I don't get a board with a BMC, I could attach a PiKVM and turn it on when I need it, yes? (so that it doesn't draw an extra 5W all the time...)
 

guiniol

Member
Oct 11, 2024
  • SSDs: it seems M.2 SSDs are even cheaper than SATA ones. Should I be going for that? There must be expansion cards to add M.2 slots. That would open up the choices to motherboards with fewer than 6 SATA ports, but force full-size ATX so that I also have another slot for the SFP28 NIC.
I can answer that part, as it was a late addition I had not researched. M.2 SSDs are cheaper, but getting the PCIe lanes to use them is expensive because it requires a server processor. So no, not really in my budget.
 

Tech Junky

Active Member
Oct 26, 2023
You can get boards with 5-6 M.2 sockets on them, but beyond the first two you're running them through the chipset. The Intel chipset link provides x8 while AMD's is x4.
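Rough uplink numbers for context (a sketch; assumes B550's chipset link is Gen3 x4, X570's is Gen4 x4, and recent Intel DMI 4.0 is effectively Gen4 x8):

Code:
    # Approximate usable chipset-uplink bandwidth (PCIe 128b/130b encoding).
    def uplink_gbps(lanes: int, gen: int) -> float:
        gt_per_lane = {3: 8.0, 4: 16.0}[gen]     # GT/s per lane, PCIe Gen3/Gen4
        return lanes * gt_per_lane * 128 / 130   # roughly usable Gb/s

    print(uplink_gbps(4, 3))  # B550 chipset link: ~31.5 Gb/s (~3.9 GB/s)
    print(uplink_gbps(4, 4))  # X570 chipset link: ~63 Gb/s   (~7.9 GB/s)
    print(uplink_gbps(8, 4))  # Intel DMI 4.0 x8:  ~126 Gb/s  (~15.8 GB/s)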
 

louie1961

Active Member
May 15, 2023
Here's what I just built. It is based on the 45HomeLab HL8, but I don't need hot-swappable hard drives, so I elected to save some money and not pay for their case or backplane. I also went AM4 to avoid spending so much on memory and a processor.

Gigabyte B550I Aorus Pro AX motherboard
64GB of DDR4-3200 ECC RAM (NEMIX)
AMD Ryzen 5 Pro 5650GE (35-watt CPU, to hopefully save money on electricity)
2x Patriot P210 256GB drives in a mirror for the OS (TrueNAS Scale)
4x Samsung SM863a 960GB enterprise SSDs
2x Seagate IronWolf ST4000VN006 4TB
1x Teamgroup MP44 1TB NVMe drive
6-port M.2 NVMe to SATA adapter
Corsair RM650 PSU
Fractal Design Node 304 case
10Gtek Intel X520-DA2 dual-SFP+ 10GbE NIC

I have the SSDs arranged in 2 mirrored vdevs striped together (sort of similar to a RAID 10, but using ZFS), and I have the spinners in a mirror with the NVMe used as a read cache. I don't have very much data to store, less than 1TB total, so I went with smaller disks than most folks. I have the disks set up in two pools, one called Fast_SSD and one called Slow_HDD. The fast pool will be for things like Kubernetes and Docker storage, as well as iSCSI and NFS shares to Proxmox. The slow pool will be my pictures, important documents, etc. The boot drives and the spinners are connected to the M.2 to SATA adapter; the enterprise SSDs are connected to the motherboard.
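(For reference, rough usable space for that layout; a sketch that ignores ZFS overhead. Each 2-way mirror vdev contributes one drive's worth of capacity, and striping sums the vdevs:)

Code:
    # Sketch: usable capacity of striped mirrors vs. a single mirror.
    def striped_mirrors_tb(vdevs: int, drive_tb: float) -> float:
        # Each 2-way mirror vdev adds one drive's capacity to the stripe.
        return vdevs * drive_tb

    print(striped_mirrors_tb(2, 0.96))  # Fast_SSD: 2x mirrored 960GB -> ~1.9 TB
    print(striped_mirrors_tb(1, 4.0))   # Slow_HDD: one 2x 4TB mirror -> 4 TB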

I have no trouble fitting all 8 drives in the case, but fitting the two boot drives took a little finagling. If I had it to do over, I would probably buy a slightly smaller CPU cooler. The Noctua NH-L12Sx77 I went with makes the SATA connections on the drives a little difficult to access. I literally just finished the build tonight. I will do speed testing tomorrow. I hope that gives you some ideas.
 

Tech Junky

Active Member
Oct 26, 2023
slightly smaller CPU cooler
Easier to do the drives and such before mounting the cooler if the ports/sockets are close to it. Most primary M.2 sockets tend to be right under the cooler, because the traces need to be short for the highest bandwidth. The SATA ports tend to be off in one corner, though, with maybe a straggler across the board running off another controller. It can be a pain sometimes, depending on how often you need access. I fix this issue with a graphite pad under the cooler instead of paste, so I don't have to deal with that side of things when popping it off to get access.
 

louie1961

Active Member
May 15, 2023
The cooler is actually up against the drive cages, making the SATA connections on the drives difficult to reach.
 

guiniol

Member
Oct 11, 2024
74
13
8
You can get boards with 5-6 M.2 sockets on them, but beyond the first two you're running them through the chipset. The Intel chipset link provides x8 while AMD's is x4.
But then you're limited by the bandwidth from the chipset to the CPU (4 PCIe lanes, I believe), no?

Gigabyte B550I Aorus Pro AX motherboard
64GB of DDR4-3200 ECC RAM (NEMIX)
AMD Ryzen 5 Pro 5650GE (35-watt CPU, to hopefully save money on electricity)
2x Patriot P210 256GB drives in a mirror for the OS (TrueNAS Scale)
4x Samsung SM863a 960GB enterprise SSDs
2x Seagate IronWolf ST4000VN006 4TB
1x Teamgroup MP44 1TB NVMe drive
6-port M.2 NVMe to SATA adapter
Corsair RM650 PSU
Fractal Design Node 304 case
10Gtek Intel X520-DA2 dual-SFP+ 10GbE NIC
Thanks for the build. That's super close to what I am settling on. I do have a few questions/comments:

1. PCI-E lane juggling is the main exercise. So, I would guess:
  1. The MP44 in the M.2 slot connected to the chipset
  2. The M.2 to SATA adapter in the M.2 slot connected to the CPU, with the 4x SM863a plugged in there
  3. The two P210 and the two Ironwolf on the SATA ports
  4. And the NIC in the PCIe x16 slot
So that wastes 8 PCIe lanes (since the NIC is only x8). I am looking at larger boards to see if I could spread those lanes across two slots, so that I could have a NIC and an HBA (or go with a non-G version to get PCIe Gen4 and have an Arc GPU or whatever in the second slot).

2. Could you share the total cost for your build? Not sure if it translates since I am in the EU.
3. Where did you find the GE version? The ones I find on eBay are quite expensive (but probably an EU thing too).
 

Tech Junky

Active Member
Oct 26, 2023
711
240
43
The Intel chipset link provides x8 (DMI 4.0, 600/800 series) while AMD's is x4.

NIC is only x8
Depending on the generation, it might work just fine in a newer board with a short slot, i.e. x8. Newer 10GbE cards can be as small as x1 for a Gen4 slot, and x4 for a dual-port card.

Sometimes it's worth the upgrade to save in the long run and free up a higher-priority slot. I've been looking at 10GbE cards with the thought of upgrading Wi-Fi to 7, which with a single client should hit or exceed 5GbE. Still waiting on some Linux work before firing up the M.2 card I have in the box right now, which I picked up this time last year with the intent of running hostapd on it as a Wi-Fi 7 AP. If it works well, like AC used to, then it's a $35 AP instead of $500.

non-G version
Either of them will work and have a GPU for minimal output, i.e. non-gaming. I run the 7900X, and before diving into the Arc for processing it worked just fine for basic stuff. Since it's a headless box, there's not much I would be doing graphically other than an OS rebuild.
 

louie1961

Active Member
May 15, 2023
Thanks for the build. That's super close to what I am settling on. I do have a few questions/comments:

1. PCI-E lane juggling is the main exercise. So, I would guess:
  1. The MP44 in the M.2 slot connected to the chipset
  2. The M.2 to SATA adapter in the M.2 slot connected to the CPU, with the 4x SM863a plugged in there
  3. The two P210 and the two Ironwolf on the SATA ports
  4. And the NIC in the PCIe x16 slot
So that wastes 8 PCIe lanes (since the NIC is only x8). I am looking at larger boards to see if I could spread those lanes across two slots, so that I could have a NIC and an HBA (or go with a non-G version to get PCIe Gen4 and have an Arc GPU or whatever in the second slot).

2. Could you share the total cost for your build? Not sure if it translates since I am in the EU.
3. Where did you find the GE version? The ones I find on eBay are quite expensive (but probably an EU thing too).
PCIe juggling? Not really; the board is already laid out with which PCIe lanes go where. Not using one of the M.2 slots doesn't free up its PCIe lanes for something else, like an x16 slot.
1. Yes, the MP44 is on the back of the board in the second M.2 slot
2. The SATA adapter is on the front of the board, but because SATA throughput drops when you have more than 3 or 4 drives plugged into it, I put the slower spinners there, as well as the OS boot pool, since speed won't matter for those drives. The flash drives are on the motherboard SATA plugs, and if I chose to use hardware RAID, I could do that with the flash drives, as the board supports it.
3. Nope, on the adapter
4. Correct, the NIC is in the single PCIe slot

Total cost without drives was $742. I had some drives already. In euros I would guess that is about 800. I bought everything but the CPU from Newegg, which I am pretty sure operates in the EU.

The Ryzen 5 Pro 5650GE was found on eBay as a new processor. It came out of Turkey, believe it or not. It cost $120. In retrospect, having played with this build for a few days now, I probably went overboard with this processor and with the amount of RAM (64GB). If I were trying to be more budget-conscious, I would probably be well served with an older Ryzen 5 Pro 4650G or even a Ryzen 3 Pro. Even with several Docker containers running, the CPU utilization barely moves. And I only have a total of 8TB of storage in this box, so the ARC is barely consuming any memory. I am running TrueNAS Scale, Electric Eel RC2.
 

homeserver78

Member
Nov 7, 2023
I looked for motherboards with at least 6 SATA connections and ECC support. I saw some ASRock Rack boards, but those are significantly more expensive. I also found one with a 10GbE (or maybe 2.5GbE) port, which would let me start without the SFP28 NIC (which I won't need for a few months), but it was more expensive than the models I listed above plus a ConnectX-4.
Check out the ASRock B550M Steel Legend: ECC support (UDIMM, of course), 6 SATA, 2.5GbE, and an x4 PCIe slot (via the chipset). The only issue as I see it is that there is no up-to-date BIOS version that is not marked as "beta", whatever that means.
 

louie1961

Active Member
May 15, 2023
Check out the ASRock B550M Steel Legend: ECC support (UDIMM, of course), 6 SATA, 2.5GbE, and an x4 PCIe slot (via the chipset). The only issue as I see it is that there is no up-to-date BIOS version that is not marked as "beta", whatever that means.
I looked at that board, but the extra SATA ports make the second M.2 slot unavailable. You can use SATA ports 3, 5, and 6, or you can use the second M.2 slot, but not both. That was a deal breaker for me, plus I wanted an ITX-sized board.
 

guiniol

Member
Oct 11, 2024
I looked at that board, but the extra SATA ports make the second M.2 slot unavailable. You can use SATA ports 3, 5, and 6, or you can use the second M.2 slot, but not both. That was a deal breaker for me, plus I wanted an ITX-sized board.
That's what I meant by juggling lanes. Do you get more performance from the M.2 to 6-port SATA adapter, or from just using the 3 SATA ports provided by the motherboard? (At least, that's the question in my case, because I only plan on having 1 disk for the OS and 5 data drives for now, with plans to expand up to 8 as the data grows.)
 

louie1961

Active Member
May 15, 2023
Just to be clear, there are 4 SATA ports, 2 M.2 slots, and 1 x16 PCIe slot on the board, total. Not sure if "3 SATA" was a typo or you were operating under the assumption that the board only has 3 SATA ports.

As to the M.2 to SATA adapter, it uses 2 PCIe lanes, affording it a total of 16Gb/s of bandwidth. On paper, that would be enough for three 6Gb/s SATA ports. In practice, according to some of the reviews I have seen, you can run 4 drives at max throughput no problem, but all of the drives start to throttle back when you add drives 5 and 6 (the adapter I have is based on the ASM1166 chip). So I put my boot drives on the M.2 adapter, along with the 2 spinning HDDs. My assumption is that the throughput required for the boot drives will be very low, and the spinning drives can't max out a SATA 3 connector. So, all totaled, those 4 drives shouldn't be bottlenecked on the M.2 adapter. I kept my 4 SSDs on the 4 motherboard SATA ports, as I mentioned earlier, because they will see the highest throughput. I think if you put in all spinning drives, you won't have an issue either way.
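Rough numbers behind that throttling (a sketch; assumes the card is PCIe 3.0 x2, as the common ASM1166 boards are, and ~550 MB/s of real-world payload per SATA SSD):

Code:
    # Sketch: ASM1166 M.2 adapter uplink vs. aggregate SATA demand.
    uplink_gbs = 2 * 8.0 * (128 / 130) / 8  # PCIe 3.0 x2 -> ~1.97 GB/s usable
    per_drive_gbs = 0.55                    # ~real-world SATA SSD ceiling

    for drives in range(1, 7):
        demand = drives * per_drive_gbs
        status = "fits" if demand <= uplink_gbs else "oversubscribed"
        print(f"{drives} drives: {demand:.2f} GB/s -> {status}")
    # 1-3 drives fit cleanly; 4 is borderline (the reviews above say it still
    # works); 5-6 clearly oversubscribe the x2 link, hence the throttling.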

Alternatively, if you want to give up the 10GbE NIC, you could use the built-in 2.5GbE NIC on the motherboard and put a SAS HBA in the x16 slot.
 

guiniol

Member
Oct 11, 2024
Just to be clear, there are 4 SATA ports, 2 M.2 slots, and 1 x16 PCIe slot on the board, total. Not sure if "3 SATA" was a typo or you were operating under the assumption that the board only has 3 SATA ports.

As to the M.2 to SATA adapter, it uses 2 PCIe lanes, affording it a total of 16Gb/s of bandwidth. On paper, that would be enough for three 6Gb/s SATA ports. In practice, according to some of the reviews I have seen, you can run 4 drives at max throughput no problem, but all of the drives start to throttle back when you add drives 5 and 6 (the adapter I have is based on the ASM1166 chip). So I put my boot drives on the M.2 adapter, along with the 2 spinning HDDs. My assumption is that the throughput required for the boot drives will be very low, and the spinning drives can't max out a SATA 3 connector. So, all totaled, those 4 drives shouldn't be bottlenecked on the M.2 adapter. I kept my 4 SSDs on the 4 motherboard SATA ports, as I mentioned earlier, because they will see the highest throughput. I think if you put in all spinning drives, you won't have an issue either way.

Alternatively, if you want to give up the 10GbE NIC, you could use the built-in 2.5GbE NIC on the motherboard and put a SAS HBA in the x16 slot.
The thing I am trying to wrap my head around is how much bandwidth you get for each drive and the NIC. PCIe Gen4 would be nice, but that needs a non-G CPU, and that means adding a GPU, which also takes lanes. Maybe that only needs an x1 slot if I don't care about its performance? I am planning all SSDs (but SATA, because M.2 takes even more lanes :D), but I don't understand what to expect from the SATA ports provided by the chipset. X570 can provide 9(!) but still only has 4 PCIe lanes to the CPU. How does that work?
 

guiniol

Member
Oct 11, 2024
So... this is what I got (reusing your motherboard, since it allows full speed to the 2 M.2 slots while using the 4 SATA ports and the PCIe x16 slot):

APU Build
  • Ryzen 5 Pro 5650G(E?)
  • Gigabyte B550I AORUS PRO AX 1.0 (not actually available locally, so I need to find a replacement).
    • 4 SATA + 2x PCIe x4 M.2
  • 2x 32GB ECC RAM
  • 1x M.2 to 4/6-port SATA adapter
  • PCIe NIC (SFP28)
This gives the boot SSD on one M.2 slot, and up to 8 data disks across the 4 motherboard SATA ports and up to 4 M.2-to-SATA adapter ports.

CPU+GPU Build
  • Ryzen 5 5600 (for ECC + PCIe Gen4)
  • Asrock X570 Steel Legend
  • 2x 32GB ECC RAM
  • PCIe NIC (SFP28) (in the PCIe x16 slot)
  • Arc A310 GPU (in the PCIe x4 slot)
Would this even give theoretically better performance? The 8 SATA SSDs would go through the chipset, but that would be connected via PCIe Gen4, which is faster? Is it worth the increased budget?
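My rough arithmetic so far (a sketch; assumes ~550 MB/s of real-world throughput per SATA SSD and the Gen3 x4 / Gen4 x4 chipset uplinks of B550 / X570):

Code:
    # Sketch: can the chipset uplink feed 8 SATA SSDs at once?
    demand_gbs = 8 * 0.55                  # ~4.4 GB/s aggregate from 8 SSDs
    b550_gbs = 4 * 8.0 * (128 / 130) / 8   # Gen3 x4 uplink -> ~3.9 GB/s
    x570_gbs = 4 * 16.0 * (128 / 130) / 8  # Gen4 x4 uplink -> ~7.9 GB/s
    sfp28_gbs = 25 / 8                     # one 25GbE port -> ~3.1 GB/s

    print(demand_gbs, b550_gbs, x570_gbs, sfp28_gbs)
    # Gen3 x4 would cap 8 SSDs (~4.4 > ~3.9 GB/s); Gen4 x4 has headroom,
    # but a single 25GbE link (~3.1 GB/s) is the lower ceiling either way.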
 

guiniol

Member
Oct 11, 2024
This is what I currently have (with the list of all the builds I am considering here):
ROG STRIX B550-A GAMING: CHF 110.00
Ryzen 5 Pro 5650GE: CHF 150.00
2x 32GB ECC: CHF 200.00
M.2 1TB: CHF 70.00
ConnectX4-Lx 2x 25Gb: CHF 50.00
CEACENT CNS44PE16 with 8x U.2 cables: CHF 180.00
Tower-style cooler: CHF 60.00
PSU: CHF 60.00
Jonsbo N5: CHF 200.00
TOTAL: CHF 1080.00

Comments welcome.
 

name stolen

Active Member
Feb 20, 2018
The Strix B550-A makes lane-maximizing (juggling) more difficult, if you're still after that, although it looks cool as hell (I had one but replaced it because of the lack of PCIe flexibility and the chipset being connected at Gen3 rather than Gen4). The Strix B550-E (nope, not the -F, that's the same as the -A) or a Crosshair X570 is what you're after if you want to be able to do x8/x4/x4 + x4 for your CPU-connected PCIe lanes, and still have x8 (x4 in the slot and x4 M.2) available behind the chipset on the X570.
 

guiniol

Member
Oct 11, 2024
The Strix B550-A makes lane-maximizing (juggling) more difficult, if you're still after that, although it looks cool as hell (I had one but replaced it because of the lack of PCIe flexibility and the chipset being connected at Gen3 rather than Gen4). The Strix B550-E (nope, not the -F, that's the same as the -A) or a Crosshair X570 is what you're after if you want to be able to do x8/x4/x4 + x4 for your CPU-connected PCIe lanes, and still have x8 (x4 in the slot and x4 M.2) available behind the chipset on the X570.
Good to know, thanks. I honestly couldn't tell the difference between all those versions :D
 

guiniol

Member
Oct 11, 2024
I should have posted this earlier, but the build evolved quite a bit (over various threads on this forum and some offline discussions). This is what I got in the end, with some last-minute changes based on what was actually available:

ASRock B550M Pro4
Ryzen 5 Pro 5650GE
2x 32GB ECC RAM
Kioxia Exceria G2 1TB
ConnectX4-Lx 2x 25Gb
HDPLEX 250W GaN Passive
Streacom FC10 ST-FC10B-ALPHA
2x 3.5" to 2.5" brackets
PCIe riser
2x SFP+ transceivers
6x PM863a 3.84TB (SATA SSD)


Two big changes are:
1. The drives are SATA SSDs (enterprise, but not U.2, etc.) and are connected to the SATA ports on the motherboard
2. The case is fully passive, and I had to add a heatpipe between the NIC and the side of the case (as if it were a GPU)

There is room for more drives in the chassis, though that will require some inventiveness, and there is a PCIe slot for more expandability.