Should I use a MegaRAID controller or just go with FreeNAS software RAID?


CodingFiend

New Member
Oct 20, 2021
I've had excellent luck for 7 years with a Broadcom MegaRAID card with CacheVault protection, running a NAS with 12 4 TB Seagate drives.
I want to replace these with 12 3.8 TB SSDs (planning to use Intel's latest DC drives). Those Seagate enterprise drives have been magnificent, except for the firmware bug that bricked them after a certain number of power-on hours (which I was nimble enough to fix before we lost more than 3).

The burning question is: should I bother to run the RAID card? It doesn't seem like ZFS has RAID 6. RAID 6 is superior to RAID 5 in that RAID 6 (and 60) can lose two drives without data loss.

Perhaps someone can chime in with their experiences.

SSDs will be a lot faster, and I expect to connect via 10 Gbit Ethernet. But the question is: should I use another RAID card?
 

ericloewe

Active Member
Apr 24, 2017
"It doesn't seem like ZFS has RAID 6"
It would seem so, incorrectly. ZFS has RAIDZ1, broadly equivalent to RAID 5; RAIDZ2, broadly equivalent to RAID 6; and RAIDZ3, which extends parity to three disks, something that never caught on in the world of hardware RAID.
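To make the parity trade-off concrete, here is a small sketch of usable capacity and fault tolerance for a 12-drive pool at each RAIDZ level, using the 3.8 TB drive size from the original post (simplified: it ignores ZFS metadata and padding overhead, so real figures are somewhat lower):

```python
# Usable capacity per RAIDZ level for a single 12-drive vdev.
# Assumes 3.8 TB drives as in the original post; ignores ZFS overhead.
DRIVES = 12
SIZE_TB = 3.8

for level, parity in (("RAIDZ1", 1), ("RAIDZ2", 2), ("RAIDZ3", 3)):
    usable_tb = (DRIVES - parity) * SIZE_TB
    print(f"{level}: survives {parity} disk failure(s), ~{usable_tb:.1f} TB usable")
```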
 

gea

Well-Known Member
Dec 31, 2010
While ZFS RAIDZ2 is equivalent to RAID 6 in the number of disks that can fail, and two striped RAIDZ2 vdevs are like RAID 60, you can go beyond that. You can use RAIDZ3, which allows three disks to fail, and ZFS is not limited to striping two RAIDZ2 vdevs: it can stripe n of them.
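As a sketch of what the "RAID 60-like" layout above would look like, here is a hypothetical `zpool` command building the 12-SSD pool as two 6-disk RAIDZ2 vdevs (pool and device names are placeholders, not from the thread):

```shell
# Hypothetical: 12 SSDs arranged as 2 x 6-disk RAIDZ2 vdevs,
# the ZFS analogue of RAID 60. Device names are placeholders.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl

# ZFS is not limited to two such vdevs; a third keeps striping across all:
# zpool add tank raidz2 sdm sdn sdo sdp sdq sdr
zpool status tank
```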

More important, and the reason ZFS does not call its RAID levels 5 or 6, is that ZFS is not affected by the write hole problem of RAID 5/6, thanks to Copy on Write (which avoids a corrupted array or filesystem on a crash during a write). Additionally, ZFS adds checksums to data and metadata. This allows real-time verification of data during reads and auto-repair (a self-healing filesystem). Even on a plain mirror, only ZFS can decide, when the two sides disagree, which copy is good and which is bad; see the "write hole" phenomenon in RAID 5, RAID 6, RAID 1, and other arrays.

While hardware RAID offers write-cache protection via a BBU or flash, ZFS has a much larger (and faster) write cache with superior, faster protection through its SLOG concept. ZFS also has superior cache strategies, and its cache is RAM, much faster than the fastest NVMe (roughly 10% of RAM for the write cache, and up to 80% of free RAM for the read cache).
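The rough cache proportions mentioned above can be sketched for a hypothetical machine (the 10%/80% figures are the poster's rule of thumb, not exact OpenZFS tunables, and the RAM size is an assumption for illustration):

```python
# Illustrative ZFS RAM cache sizing per the rule of thumb above:
# ~10% of RAM for the write cache (dirty data), up to ~80% of free RAM
# for the ARC read cache. RAM_GB is a hypothetical example value.
RAM_GB = 64

write_cache_gb = 0.10 * RAM_GB
read_cache_gb = 0.80 * RAM_GB

print(f"~{write_cache_gb:.1f} GB write cache, "
      f"up to ~{read_cache_gb:.1f} GB ARC read cache")
```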

Not to forget that with the special vdev concept you can speed up data throughput on ZFS based on structural criteria such as small I/O, metadata, selected filesystems, or dedup tables. This is mostly superior to storage tiering based on hot/cold data decisions.
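As a sketch of the special vdev and SLOG concepts above, here are hypothetical commands against a pool named "tank" (all device and dataset names are placeholders):

```shell
# Hypothetical: add a mirrored special vdev for metadata/small blocks,
# plus a fast SLOG device, to an existing pool. Names are placeholders.
zpool add tank special mirror nvme0n1 nvme1n1
zpool add tank log nvme2n1

# Route blocks of 64K or smaller on a chosen filesystem to the special vdev:
zfs set special_small_blocks=64K tank/data
```

Note that the special vdev holds pool-critical metadata, which is why it should be mirrored: losing it loses the pool.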

btw
While ZFS is superior to traditional RAID, you should never use a hardware RAID controller with it; use a dumb HBA instead. Otherwise, despite ZFS, you remain affected by the write hole problem and lose auto-repair, since self-healing requires that ZFS can see all disks and data directly.
 