Hardware vs software RAID5 (ext4 vs ZFS)


Un Nameless

New Member
Aug 17, 2021
Hi folks,

I've got a 100-drive JBOD connected to my server via a 12Gb/s SAS connection to an LSI 9580 RAID card. I'll be using this JBOD for large-file storage!

As far as I'm aware, I've got two options: either use the RAID controller's BIOS management to create a couple of RAID5 volumes, let the hardware handle the RAID, and format those volumes with ext4 in software (at that point ZFS would yield few if any advantages over ext4); or present the drives as JBOD in the BIOS, skip the hardware RAID, and build ZFS RAIDZ1 pools in software.
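
Roughly, I picture the two options like this (just a sketch; the pool name, mount point, and /dev paths are placeholders, not my actual layout):

    # Option A: hardware RAID5 built in the 9580 BIOS, ext4 on the exported volume
    mkfs.ext4 -L bulk /dev/sdb        # /dev/sdb = virtual drive presented by the card
    mkdir -p /srv/bulk && mount /dev/sdb /srv/bulk

    # Option B: controller in JBOD mode, ZFS does the RAID in software
    zpool create tank raidz1 sdc sdd sde sdf sdg sdh   # one 6-disk RAIDZ1 vdev
    zfs create tank/bulk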

What are your thoughts? Which approach is best, and what are the drawbacks of each?

Thank you!
 

dswartz

Active Member
Jul 14, 2011
Whether SW or HW, 100 drives is a hell of a lot to have in only a couple of pools! That's just asking to get hosed!
 

dswartz

Active Member
Jul 14, 2011
Well, I'd prefer ZFS over ext4 for two reasons: transparent compression and checksumming to protect your data. It also makes you less dependent on vendor-specific RAID.
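
For example (a quick sketch, assuming a pool named tank):

    zfs set compression=lz4 tank    # transparent compression for every dataset
    zfs get compressratio tank      # see what it is saving you
    zpool scrub tank                # re-read everything and verify checksums
    zpool status -v tank            # reports (and names) any damaged files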
 

Un Nameless

New Member
Aug 17, 2021
Well, I'd prefer ZFS over ext4 for two reasons: transparent compression and checksumming to protect your data. It also makes you less dependent on vendor-specific RAID.
Do you think there will be significant overhead for software RAID vs hardware?
 

dswartz

Active Member
Jul 14, 2011
That would depend on the sw and hw implementations, which is getting way out of my area of expertise...
 

gea

Well-Known Member
Dec 31, 2010
DE
On a crash during a write, a conventional RAID array or filesystem can become corrupted (the "write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays). ZFS is immune thanks to copy-on-write, and when a problem does happen, ZFS can detect and repair it on the fly via checksums and redundancy.

ZFS copy-on-write also gives you snapshots (versioning and ransomware protection).
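
A sketch of what that looks like in practice (the dataset name tank/data is just an example):

    zfs snapshot tank/data@before-update    # instant, read-only version
    zfs list -t snapshot tank/data          # browse existing versions
    zfs rollback tank/data@before-update    # undo e.g. a ransomware event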

On problems, older filesystems need an offline fsck. Without checksums only some structural problems can be repaired, and the check can mean days of downtime. ZFS checks (scrubs) run online.

A RAID5 array is in danger of complete loss when a disk fails and an additional read error occurs during rebuild; in the same situation ZFS Z1 will only report a damaged file. With more than, say, 5 disks per vdev use Z2; with, say, 10-16 disks per vdev use Z3; with more disks, add vdevs.
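
For a 100-disk shelf that rule of thumb could translate into something like this (illustrative only; use stable /dev/disk/by-id names in practice, and repeat the raidz2 groups for all ten legs):

    # ten 10-disk Z2 vdevs in one pool (first two legs shown)
    zpool create tank \
      raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
      raidz2 sdm sdn sdo sdp sdq sdr sds sdt sdu sdv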

Software RAID parity calculation is a minor load for any modern CPU capable of real-time 3D. Only ZFS encryption adds serious CPU load.

For ZFS, use a simple HBA such as an LSI 9300 or an OEM version, ideally with IT firmware (IR firmware is OK too).
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
HBA (or RAID card with IT mode) also gives you flexibility to upgrade the card in the future, or swap in a cheaper HBA as a temporary solution if/when it fails.

Not sure if the 9580 has a true IT mode, though.
 

ari2asem

Active Member
Dec 26, 2018
The Netherlands, Groningen
if you have the money and know where to buy this, then look at


For me ZFS has one big disadvantage: you can't expand your pool/array later with new disks the way hardware RAID5 can. This is the only reason I don't use ZFS.

Besides this disadvantage, ZFS is great.
 

gea

Well-Known Member
Dec 31, 2010
DE
if you have the money and know where to buy this, then look at


For me ZFS has one big disadvantage: you can't expand your pool/array later with new disks the way hardware RAID5 can. This is the only reason I don't use ZFS.

Besides this disadvantage, ZFS is great.
You can replace all disks in a ZFS vdev to increase capacity, or you can add more vdevs, similar to RAID5 -> RAID50. You cannot extend a single vdev, e.g. grow a 6-disk Z2 into a 7-disk Z2 (there is work underway to allow this).

The above restriction is more relevant at home, for those wanting to go from 6 disks to 7, than for larger setups where you add whole vdevs, e.g. go from 1 x 6-disk Z2 to n x 6-disk Z2.
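
In command form the difference is roughly this (pool and device names are placeholders):

    # works: grow the pool by adding a whole new vdev
    zpool add tank raidz2 sdw sdx sdy sdz sdaa sdab

    # works: grow a vdev by replacing every disk with a larger one
    zpool set autoexpand=on tank
    zpool replace tank sdc sdac    # sdac = the larger disk; repeat per disk, resilver in between

    # does not work (yet): adding a single extra disk to an existing raidz vdev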
 

Un Nameless

New Member
Aug 17, 2021
On a crash during a write, a conventional RAID array or filesystem can become corrupted (the "write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays). ZFS is immune thanks to copy-on-write, and when a problem does happen, ZFS can detect and repair it on the fly via checksums and redundancy.

ZFS copy-on-write also gives you snapshots (versioning and ransomware protection).

On problems, older filesystems need an offline fsck. Without checksums only some structural problems can be repaired, and the check can mean days of downtime. ZFS checks (scrubs) run online.

A RAID5 array is in danger of complete loss when a disk fails and an additional read error occurs during rebuild; in the same situation ZFS Z1 will only report a damaged file. With more than, say, 5 disks per vdev use Z2; with, say, 10-16 disks per vdev use Z3; with more disks, add vdevs.

Software RAID parity calculation is a minor load for any modern CPU capable of real-time 3D. Only ZFS encryption adds serious CPU load.

For ZFS, use a simple HBA such as an LSI 9300 or an OEM version, ideally with IT firmware (IR firmware is OK too).
Thanks for the elaborate input! I was leaning towards ZFS not only for the goodies you mentioned but also for the extra control from the software side!

I guess, though, that money was wasted on such an expensive LSI RAID controller, since, I assume, once it's set to JBOD and the rest is done in software, it becomes just a pass-through device.
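
If it helps anyone later: on MegaRAID cards that switch is usually done with storcli, something like the following (hedged, I have not verified the exact syntax on the 9580):

    storcli /c0 show           # controller 0: current mode and drive list
    storcli /c0 set jbod=on    # expose the disks directly to the OS for ZFS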
 

Un Nameless

New Member
Aug 17, 2021
if you have the money and know where to buy this, then look at


for me ZFS has one big disadvantage:
you can't expand your pool/array later with new disks, like hardware raid-5 can.
this is the only reason why i don't use ZFS.

beside this disadvantage, is ZFS great
Isn't this more of a solution for NVMe RAID or solid state than for traditional spinners? I can't see what advantage I'd get as long as I'm still using 100 Seagate Exos X18 7200rpm drives.