All-M.2 home server build (Asus W680, Broadcom HBA, IcyDock)?


Joss

New Member
Jan 21, 2024
My original idea for an Unraid home server/NAS build was to use SATA SSDs. But more and more turnkey all-M.2 devices are being released, so I started thinking about a DIY M.2 option as well. I have three specific questions, but if you spot any other problems, please let me know.

I would use gen3 M.2 SSDs, and I'd configure six of them in a RAIDz1 (total raw capacity: 24 TB). I would make them hot-swappable with something like the IcyDock ToughArmor MB873MP-B V2:


(I'd replace the built-in fans with two hopefully quieter ones, e.g. from Noctua.)
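
For what it's worth, here's my quick capacity math for that pool. It assumes six 4 TB drives (which is what 24 TB raw works out to); the per-drive size is my assumption, and real usable space will be somewhat lower:

```python
# Quick capacity check for the proposed pool. Assumes six 4 TB drives
# (which is what 24 TB raw works out to); actual usable space will be a
# bit lower due to ZFS metadata/padding and TB-vs-TiB accounting.
drives = 6
size_tb = 4                          # assumed per-drive capacity
raw_tb = drives * size_tb            # 24 TB raw
usable_tb = (drives - 1) * size_tb   # RAIDz1: one drive's worth of parity

print(f"raw: {raw_tb} TB, usable before ZFS overhead: {usable_tb} TB")
# raw: 24 TB, usable before ZFS overhead: 20 TB
```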

As a motherboard I would use something like the Asus Pro WS W680-ACE (non-IPMI):


The board's first CPU-direct x8 PCIe slot would be for the fiber NIC; the second one would host the HBA for the M.2 ZFS pool. I'd connect four of the six M.2 SSDs through that HBA, and I think the Broadcom HBA 9500-16i might be a good choice:


I would connect the HBA and the enclosure with two adapter cables from SlimSAS SFF-8654 8i to dual OCuLink SFF-8612 4i, e.g. the IcyDock MB206L-B:


My first two questions are about the HBA:

(1) Obviously, if using gen4 M.2 SSDs, I would only achieve full speed (2 * x4) with two SSDs, because the HBA is PCIe 4.0 x8 upstream. But since I'd be using gen3 M.2 SSDs for the pool, and since the HBA has 16 (2 * 8) lanes downstream, I would be able to achieve full speed with four gen3 x4 M.2 SSDs. Or am I making a mistake here somewhere?
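
Here's the back-of-the-envelope math behind question (1), in case it helps; the per-lane throughput figures (~985 MB/s for gen3, ~1.97 GB/s for gen4, after protocol overhead) are approximate:

```python
# Sanity check for question (1): how many x4 NVMe SSDs can run at full
# speed behind the 9500-16i's PCIe 4.0 x8 uplink? Per-lane numbers are
# approximate effective throughput after protocol overhead.
GEN3_LANE_GBPS = 0.985   # ~985 MB/s per PCIe 3.0 lane
GEN4_LANE_GBPS = 1.969   # ~1.97 GB/s per PCIe 4.0 lane

uplink_gbps = 8 * GEN4_LANE_GBPS            # HBA upstream: PCIe 4.0 x8

for gen, lane in (("gen3", GEN3_LANE_GBPS), ("gen4", GEN4_LANE_GBPS)):
    per_ssd = 4 * lane                      # each SSD uses x4 lanes
    at_full_speed = uplink_gbps / per_ssd
    print(f"{gen} x4 SSD: {per_ssd:.1f} GB/s -> "
          f"uplink covers ~{at_full_speed:.0f} drives at full speed")

# gen3 x4 SSD: 3.9 GB/s -> uplink covers ~4 drives at full speed
# gen4 x4 SSD: 7.9 GB/s -> uplink covers ~2 drives at full speed
```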

(2) The Broadcom HBA is sold (really!) cheaply by some Chinese vendors on eBay. Does anyone have experience with these? Are those sources trustworthy? (Seems kinda strange, tbh.)

I would connect the remaining two M.2 SSDs for the ZFS pool via the board's two chipset-connected gen4 M.2 slots, using two M.2 M-key to OCuLink adapter cards, e.g. the Delock 64106:


…and two OCuLink 4i cables. The adapters are gen4, but I'd use gen3 M.2 SSDs, of course, to match the other four.

The next question concerns the storage pool in general. The ZFS pool (RAIDz1) would be created across the DMI, with four M.2 SSDs behind the HBA (CPU-direct) and two behind the chipset. Since I'd be using gen3 SSDs, the DMI link (DMI 4.0 x8, i.e. PCIe 4.0 x8 equivalent) should be able to handle the bandwidth of the two chipset-attached drives while still leaving room for more expansion down the road.
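
My rough reasoning on the DMI load, assuming the W680's DMI 4.0 x8 link is roughly equivalent to PCIe 4.0 x8 (~15.8 GB/s) and that only chipset-attached devices compete for it:

```python
# Rough DMI headroom estimate. Assumption: the W680's DMI 4.0 x8 link
# behaves like PCIe 4.0 x8 (~15.8 GB/s effective). Only chipset-attached
# devices compete for it; the four SSDs behind the HBA are CPU-direct.
GEN3_X4_GBPS = 4 * 0.985        # ~3.9 GB/s per gen3 x4 SSD
DMI_GBPS = 8 * 1.969            # ~15.8 GB/s

chipset_pool_gbps = 2 * GEN3_X4_GBPS    # the two chipset-attached pool members
headroom_gbps = DMI_GBPS - chipset_pool_gbps

print(f"pool traffic over DMI: {chipset_pool_gbps:.1f} GB/s, "
      f"headroom left: {headroom_gbps:.1f} GB/s")
# pool traffic over DMI: 7.9 GB/s, headroom left: 7.9 GB/s
```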

But are there any downsides to creating a storage pool across the DMI, with disks connected both to the CPU and the chipset?

As for the CPU-direct M.2 slot, I would not populate it at first, but maybe use it at some point for a gen4 cache drive, for data that is written and rewritten often.

I would install a seventh M.2 SSD (gen3) as a standalone, unassigned, hot-swappable ext4 drive, serving as the destination for my macOS Time Machine backups. For this I would use one of the chipset's PCIe 3.0 x4 slots (x16 physical) with e.g. the Delock 90482:


(This is a PCIe 4.0 card, but everything else, i.e. the PCIe slot and the M.2 SSD, would be gen3.)

So what do you think? Any comments or warnings? Thank you. :)