HDD RAID-Z2 array or (cheap) NVMe RAID-Z1 for mass/backup storage


crashpb

New Member
Jun 9, 2022
I was planning to build a backup/mass-storage NAS using 16x 6TB WD Red drives (two RAID-Z2 pools), but after seeing recent NVMe SSD prices (the Crucial P3 Plus, to be exact) I'm considering 16x 4TB Crucial P3s in RAID-Z1 instead, to get close to the same total available storage. Once I factor in the price of an HBA, cables, and so on, the total cost of the two builds isn't far apart.
Obviously the HDD route is much more expandable and has two redundant drives per pool; on the other hand, the NVMe route has speed, ease of installation, and power consumption in its favor.
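For reference, the rough usable-capacity math for the two layouts (ignoring ZFS overhead and the TB-vs-TiB difference; a single 16-wide RAID-Z1 vdev on the NVMe side is an assumption, since I haven't settled on the exact layout):

```shell
# Two 8-wide raidz2 pools of 6 TB drives: each pool loses 2 drives to parity
hdd_usable=$((2 * (8 - 2) * 6))
# One 16-wide raidz1 vdev of 4 TB drives: loses 1 drive to parity (assumed layout)
nvme_usable=$(((16 - 1) * 4))
echo "HDD raidz2 usable:  ${hdd_usable} TB"   # 72 TB
echo "NVMe raidz1 usable: ${nvme_usable} TB"  # 60 TB
```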

The priority for the stored data is its safety, but I'm not sure how to translate HDD reliability assumptions to SSDs, especially when we're talking ZFS and RAID-Z.

I do plan to expand the storage in the future, but I won't need a lot of raw capacity (this 40-50 TB setup is going to be enough for quite a while), so I still think the NVMe route can handle even the expansion (the motherboard has 7 x16 PCIe 4.0 slots).

So, all in all, which one would you people advise?
 

rtech

Active Member
Jun 2, 2021

Sean Ho

seanho.com
Nov 19, 2019
BC, Canada
16x 6TB raidz2 HDD is very different from 16x 4TB raidz1 NVMe. For bulk storage / backup, go with big cheap HDD. Perhaps recheck pricing on SAS HDDs in your area; in the US 8TB for $40-45 is common. SAS2 HBAs are under $20.

With 16x NVMe, how were you planning on connecting them? Retimer/PLX cards are quite expensive; cabling/backplanes are expensive; even bifurcating cards are not cheap compared to SAS HBAs.

 

rtech

Active Member
Jun 2, 2021
Forgot to mention: this adapter requires working bifurcation.

A mass die-off would be my biggest concern. It can happen at any time, and you could end up with a non-functioning array.
 

crashpb

New Member
Jun 9, 2022
My motherboard has PCIe bifurcation plus 7 x16 PCIe 4.0 slots, and I plan to use x16 > 4x x4 bifurcation cards.
Again, price-wise the HDD and SSD setups are going to cost about the same for me, so the main question is how I should compare HDD and SSD reliability, considering the HDDs would be in RAID-Z2 and the SSDs in RAID-Z1.
 

rtech

Active Member
Jun 2, 2021
Both HDDs and SSDs can die early on, but after burn-in HDDs tend to last until wear and tear gets them.
SSDs can die at any time, and when they reach their wear limits they are at high risk of developing data-corrupting errors, if not total failure.

FWIW, I read on a medium-sized datacenter's blog that SSDs tend to fail at slightly higher rates than HDDs.
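As a rough way to frame the raidz2-vs-raidz1 part of your question: assuming independent failures and a guessed 2% per-drive annual failure rate (both are assumptions, and this ignores resilver-window risk, which is where a wide raidz1 really hurts), the back-of-envelope odds of losing a whole pool in a year work out like this:

```shell
# Binomial estimate of annual pool-loss probability.
# The 2% AFR is a guess; failures are assumed independent (optimistic).
awk '
# C(n, k): binomial coefficient
function C(n, k,   r, i) { r = 1; for (i = 1; i <= k; i++) r = r * (n - i + 1) / i; return r }
# loss(n, f, p): probability of MORE than f failures out of n drives at per-drive AFR p
function loss(n, f, p,   s, k) {
    s = 0
    for (k = 0; k <= f; k++) s += C(n, k) * p^k * (1 - p)^(n - k)
    return 1 - s
}
BEGIN {
    p = 0.02
    printf "8-wide raidz2  @ 2%% AFR: %.3f%% / year\n", 100 * loss(8, 2, p)
    printf "16-wide raidz1 @ 2%% AFR: %.2f%% / year\n", 100 * loss(16, 1, p)
}'
```

That comes out to roughly 0.04%/year per raidz2 pool versus about 4%/year for a 16-wide raidz1, around two orders of magnitude apart, before even counting the stress a resilver puts on the surviving drives.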

- Do a little burn-in: badblocks -svw, then smartctl --test=long
- If you need it running 24/7, keep spares on hand
- Set up smartctl warnings via email (smartd)
- Watch TBW and replace drives when they reach your threshold
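A minimal sketch of that per-drive burn-in (the `burnin` helper name and device argument are my own; note that badblocks -svw is destructive and overwrites every sector, so only run it on empty drives):

```shell
#!/bin/sh
# Hypothetical per-drive burn-in helper; the device path (e.g. /dev/sdX) is an assumption.
# WARNING: badblocks -svw is DESTRUCTIVE -- it overwrites the entire drive.
burnin() {
    device="$1"
    if [ -z "$device" ]; then
        echo "usage: burnin /dev/sdX   (destructive!)" >&2
        return 1
    fi
    badblocks -svw "$device" &&        # 4-pattern destructive write/read surface test
    smartctl --test=long "$device" &&  # queue a SMART extended self-test
    smartctl -a "$device"              # review attributes and self-test log afterwards
}
```

After the SMART long test completes, check the self-test log and reallocated/pending sector counts before trusting the drive in the pool.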
 

whit3ryno

New Member
Jan 8, 2023
I'm running a 12-drive array of SanDisk 1.92 TB SATA SSDs: 11 in a single Z2, with the last one as a hot spare. I picked up slightly used enterprise stuff, and all drives still show 94%+ health/life left. It might be worth looking into that as well, rather than cheap consumer drives.