Should I move away from my Areca 1280ML?


jbrukardt

Member
Feb 4, 2016
I have an Areca 1280ML with 16 attached drives (4TB each right now). It's running in RAID6 mode with NTFS on top, but unfortunately the drives are starting to age and it's time for me to swap them out for 8TB models. This would be the ideal time to change controllers and/or filesystems.

To start, this controller has been fantastic for me. I have run 10-20 drive arrays on it 24/7 for more than 15 years and never lost a single piece of data. It's been very fast (700 MB/s or so) and extremely reliable. True enterprise-class stuff.

However... it's ancient, and I wanted opinions: if I were to start over, what are the recommended options?

My requirements are as follows:

1) Two-drive failure tolerance
2) Online expandability - I like to be able to add drives and grow the array cleanly
3) Speed. I'm still on spinning disk for now, so single-disk speed doesn't cut it; I serve enough from this machine to come close to saturating 10Gbit, and I don't want the disk array to limit performance

Recommendations welcome for either a new, modern RAID card, or a filesystem that supports my needs (ZFS RAIDZ2?).

Will most likely be starting with 8TB drives, with a minimum of 40TB of usable space.
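For scale, the RAIDZ2 option mentioned above would look something like this - a minimal sketch only; the pool name and device paths are placeholders:

```bash
# 8x 8TB drives in a single RAIDZ2 vdev: usable space is (8-2) x 8TB = 48TB raw,
# above the 40TB target (less after TiB conversion, metadata, and slop space).
# ashift=12 suits 4K-sector drives.
zpool create -o ashift=12 tank raidz2 \
  /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
  /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4 \
  /dev/disk/by-id/ata-DRIVE5 /dev/disk/by-id/ata-DRIVE6 \
  /dev/disk/by-id/ata-DRIVE7 /dev/disk/by-id/ata-DRIVE8
```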
 

StevenDTX

Active Member
Aug 17, 2016
I was quite hesitant to retire my Areca cards. They have provided me with reliable service for many, many years.

That being said, I am quite happy using cheap HBAs with Optane and ZFS. I have not experienced a failure (knock on wood), so I don't know how easy or difficult recovery from a drive failure is. With the Areca, I got an email, I replaced the drive, and then I got another email when the rebuild finished.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
ZFS doesn't get you nice and easy online expandability though; you can't just add a drive and grow your RAID6 on to it (i.e. turning a 7-drive RAID6 array into an 8-drive RAID6 array), you have to add a whole new vdev and extend that way; even then ZFS won't rebalance the array. The only "clean" way to do expansions is to use mirror vdevs only (i.e. RAID10) where you can add a pair of drives at a time. The lack of easy expandability of this sort at the low end is the primary reason I don't really use it at home.

If the OP is solely talking about growing arrays by replacing drives with larger ones though, then ZFS is fine for this.
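To make that concrete, a rough sketch of the two expansion paths (pool and device names made up):

```bash
# Path 1: extend the pool by adding a whole new RAIDZ2 vdev (you cannot grow
# an existing RAIDZ vdev by one disk; existing data is not rebalanced).
zpool add tank raidz2 /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2 \
  /dev/disk/by-id/ata-NEW3 /dev/disk/by-id/ata-NEW4

# Path 2: grow in place by replacing every drive with a larger one,
# resilvering between swaps; the pool expands once the last drive is in.
zpool set autoexpand=on tank
zpool replace tank ata-OLD1 /dev/disk/by-id/ata-BIGGER1   # repeat per drive
```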

NTFS though - is this a file server running Windows, or is this running on your workstation? Is changing the OS not a problem here?

As an aside, I was running 3ware cards when the Areca cards appeared and I was jealous as hell. But by the time my 3ware needed replacing I'd discovered softraid and never looked back...
 

jbrukardt

Member
Feb 4, 2016
Not tied to an OS or filesystem. The box is primarily a file server, encoder, and NVR. Anything that needs Windows can run in a VM or Docker instance.

Right now it's NTFS because that's what it was when I started the array with 4 drives, seven-ish years ago.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Well, I guess if your NVR software is Windows-only you might need to look into whatever hardware support is needed.

FreeNAS is an option, though with the ZFS caveats I've listed above. I'd tentatively suggest looking into Proxmox VE as a base OS; it gives you the option of basic Linux softraid (very easy to expand/grow) or ZFS (via ZoL), as well as a very capable virtualisation stack (LXC containers and KVM). A couple of cheap HBAs and you'd be all set.
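As a starting point for the softraid route, something like this (device names hypothetical):

```bash
# A 7-disk RAID6 md array; a larger chunk size (here 512K) is a common
# choice for big sequential workloads. /dev/sd[b-h] expands to sdb..sdh.
mdadm --create /dev/md0 --level=6 --raid-devices=7 --chunk=512 \
  /dev/sd[b-h]
mkfs.xfs /dev/md0   # or mkfs.ext4, to taste
```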

Following on from my anecdote about my 3ware, one of my cards failed once and I had to use another 3ware to get the drives read, which required a lot of downtime on one of my boxes; I dare say Areca cards would also suffer from the same problem of hardware RAID devices. HBAs cost a lot less and softraid doesn't leave you up cowpat creek if your card bites the dust.

Are you looking to build a whole new system in parallel to the existing one? Do you have time/budget to experiment? Because this is one of those topics that often gets much more complicated as more details emerge :D
 

jbrukardt

Member
Feb 4, 2016
I can do a dual system, or at least a dual array in one system, for dev vs. production. Plenty of spare hardware for this one. I have another Areca I can operate in passthrough mode to act as an HBA if needed.

I haven't checked lately, but softraid parity used to be terribly inefficient - like 30-40 MB/s reads and worse writes.

Linux softraid with XFS on top could be an interesting option if it's performant. XFS is easy to grow.
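The grow path would look roughly like this - a sketch with hypothetical device names and mount point:

```bash
# Growing an md RAID6 by one disk, then growing XFS to match - all online.
mdadm --add /dev/md0 /dev/sdi
mdadm --grow /dev/md0 --raid-devices=8 --backup-file=/root/md0-grow.bak
cat /proc/mdstat        # watch the reshape progress
xfs_growfs /mnt/array   # after the reshape; takes the mount point, not the device
```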
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
I use ext4 myself, but yeah, XFS on mdadm/dm-raid works. I've basically been twiddling my thumbs for years waiting for btrfs to become a contender...

Parity RAID has improved massively since I first started using it, to the extent that the bottleneck is almost always the discs themselves (albeit at higher CPU load than with hardware RAID), and CPUs are massively more powerful than they used to be; even a 10-year-old CPU should be able to XOR at 2GB/s or more. My smallest RAID6 array (five spindles) can sustain 500MB/s sequential reads and 400MB/s sequential writes. I also use dm-cache on two of my arrays for vastly improved random performance; the same sort of thing can be achieved with ZFS L2ARC/SLOG (but again with their own set of caveats).
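If you want a feel for your own CPU's ceiling before the discs become the bottleneck, the kernel benchmarks its RAID6 syndrome routines at boot (the output line shown in the comment is illustrative):

```bash
# Prints something like "raid6: using algorithm avx2x4 gen() ... MB/s"
dmesg | grep -i raid6
```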

ZFS has the most guarantees against the various failure modes of RAID (incl. the write hole), but that comes at the expense of "easy" expandability. For any form of softraid, the LSI HBAs are the gold standard and can be had very cheaply, esp. in the US (but I've not used Areca pass-through).

If you've got the spare kit and are about to get some extra drives I'd recommend having a play with the various options to see if they'll fit your use-case.
 

jbrukardt

Member
Feb 4, 2016
Those speeds are a valuable data point. There is almost no data on the internet after 2011 about RAID6 mdraid benchmarks. If I can get upwards of 800/700 MB/s, usually with 7+ disks, I'll be happy.
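For generating your own numbers, a simple fio sanity check along these lines (paths and sizes made up):

```bash
# Sequential throughput test against the mounted array;
# direct=1 bypasses the page cache so you measure the array, not RAM.
fio --name=seqread  --filename=/mnt/array/fio.tmp --rw=read  --bs=1M \
    --size=16G --ioengine=libaio --iodepth=16 --direct=1
fio --name=seqwrite --filename=/mnt/array/fio.tmp --rw=write --bs=1M \
    --size=16G --ioengine=libaio --iodepth=16 --direct=1
```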
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Yeah, I think it was circa that time when mdraid reached parity (arf) with hardware RAID, so most people who were on the fence stopped using hardware RAID.

It's not without its pitfalls of course - plenty of people still experience crappy softraid performance due to poor SATA controllers (Marvell, and especially JMicron); one of the reasons the LSI controllers are so popular is that, as well as being hackable and affordable, they give top-tier performance for many softraid workloads, while avoiding a zillion SATA cables.

I should add that I've modified the mdraid stripe cache settings to increase the amount of RAM available for RAID6 stripes, potentially leaving more data in flight in the event of power loss, whereas on a hardware RAID card the cache is usually backed by a battery or supercap. FWIW I also don't use RAID6 for my primary array, as the rebuild times verge on the ridiculous (bottlenecked by the IO of the discs themselves), whereas RAID10 rebuilds are extremely fast; if it weren't for the fact that it doesn't rebalance (you need to rewrite the data to the expanded volume in order to rebalance), I'd have considered using ZFS mirrors instead. Automatic checksums are nice but not a must-have for me.
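For reference, the tunable in question (md device name hypothetical):

```bash
# stripe_cache_size is in pages per member disc; RAM used is roughly
# size x 4KiB x number of discs. The default (256) is conservative.
cat /sys/block/md0/md/stripe_cache_size
echo 8192 > /sys/block/md0/md/stripe_cache_size   # needs root; not persistent across reboots
```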

I also don't use 10GbE, so those >400MB/s speeds are never tested in real life over CIFS/NFS/iSCSI, only locally.