It's time to upgrade my M5015s


eptesicus

Active Member
Jun 25, 2017
I have a hypervisor and two NAS systems I built. Each server is in a Supermicro chassis with a SAS2 backplane, and I'm currently using IBM M5015s with BBUs in each server. The cards have performed well, and I haven't experienced any real issues with them. However, each NAS currently has 14x 4TB drives, with NAS01 running RAID 5 and NAS02 running RAID 6 (with the key).

My worry right now is that I add more HDDs every couple of months. When I do, the reconstruction time to add drives to the arrays is 11+ days. Since NAS02 is a copy of NAS01, I've lately just been deleting the array on one, rebuilding the array with the new drives included, and copying the data back over, then repeating the process on the other server... I don't feel safe doing this (especially with NAS01 running only RAID 5 right now), and I've been thinking about upgrading the RAID controller in each to something faster to decrease the reconstruction time.

I'm looking for recommendations. I don't know how much I want to spend just yet, but I know a new $700 card isn't going to fly (well, I'm sure it would in terms of performance, just not with the wallet). Used M5210s with a cache card seem decent, but what else is available? Are there better-performing cards for the money? I'll get two for the NASes at first, but I might replace the M5015 in my hypervisor down the road, since I could swap its backplane for SAS3 to match the 12Gb/s drives in there.
 

i386

Well-Known Member
Mar 18, 2016
Germany
I don't think that a new controller with SAS3 will decrease the rebuild/reconfiguration time. HDDs can barely saturate SATA1 (1.5 Gbit/s ≈ 150 MB/s).
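To put a rough number on that, here's a back-of-envelope sketch (the ~150 MB/s sustained throughput is an assumption, not a measurement):

```python
# Best case: read or write one 4TB drive end to end at ~150 MB/s.
# Real online capacity expansion is far slower, because the controller
# rewrites every stripe with lots of random I/O instead of one clean pass.
drive_tb = 4            # per-drive capacity
throughput_mb_s = 150   # assumed sustained HDD throughput

seconds = drive_tb * 1_000_000 / throughput_mb_s
print(f"best case per drive: ~{seconds / 3600:.1f} h")  # ~7.4 h
```

So the interface isn't what's costing you 11+ days; the expansion workload is.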

If you want to keep using hardware RAID (hypervisor = Hyper-V?), I would look at controllers with SSD caching technologies like CacheCade or MaxCache. I have a RAID controller with MaxCache (2x HGST SAS3 SSDs in RAID 1), and when I added two 6TB HDDs to my RAID 6 array (8x 6TB HGST NAS drives → 10x 6TB HDDs), the rebuild took about 48 hours.

eptesicus said: "I've lately just been deleting the array on one, building the array again with the new drives, and copying data back over."
It's silly, but you could do the same with ZFS-based storage servers: destroy the array on NAS01, add the new HDDs, create a new ZFS pool, and copy the data back from NAS02. Then repeat for NAS02. :confused:
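For what it's worth, a minimal sketch of that cycle on a ZFS box (the pool name, disk paths, and rsync source below are all hypothetical placeholders, and it assumes a Linux host with OpenZFS and SSH access between the two NASes):

```python
# Hypothetical destroy-and-recreate cycle for NAS01; data still lives on NAS02.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))            # echo each step before running it
    subprocess.run(cmd, check=True)      # abort the cycle on any failure

# 1. Destroy the old pool on NAS01.
run(["zpool", "destroy", "tank"])

# 2. Recreate the pool with the old disks plus the new ones (raidz2 ~ RAID 6).
disks = [f"/dev/disk/by-id/disk{i}" for i in range(16)]  # placeholder IDs
run(["zpool", "create", "tank", "raidz2", *disks])

# 3. Pull the data back from NAS02, e.g. with rsync over SSH.
run(["rsync", "-aH", "nas02:/tank/", "/tank/"])
```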
 

eptesicus

Active Member
Jun 25, 2017
Thanks for the input, @i386. I didn't consider the HDD interface to be the bottleneck. I am using Hyper-V on my host, and my NASes currently run Server 2016. The NAS boxes are E3s with 32GB of RAM; they used to be my hypervisors, but I built a better-performing host for that. I had thought about moving to ZFS, but I'm comfortable with what I have right now. I liked the idea of being able to add disks without losing data, but that just takes too long.

How is it that the SSD cache actually sped up the rebuild time? It was my understanding that the most frequently used files would be cached, or that new files would be written there before being committed to the array.
 

i386

Well-Known Member
Mar 18, 2016
Germany
RAID controllers work with blocks, not files. ;)

The controller caches the writes on the SSDs, orders them, and then writes them sequentially to the HDDs.
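A toy model of that effect (the seek and block timings below are made-up constants, purely for illustration):

```python
# Random block writes are absorbed by the SSDs, sorted by LBA, then
# flushed to the HDDs in one mostly-sequential pass.
import random

SEEK_MS = 8.0   # assumed seek + rotational penalty per random HDD write
BLOCK_MS = 0.1  # assumed cost per block once the head is already there

writes = random.sample(range(1_000_000), 10_000)  # random LBAs from a rebuild

# No cache: every out-of-order write pays a seek.
t_random = len(writes) * (SEEK_MS + BLOCK_MS)

# CacheCade-style: stage on SSD, order by LBA, flush sequentially.
staged = sorted(writes)
t_sorted = SEEK_MS + len(staged) * BLOCK_MS

print(f"random: {t_random/1000:.0f} s  vs  sorted flush: {t_sorted/1000:.0f} s")
```

One seek amortized over a big ordered flush beats paying a seek per write, which is why the expansion finishes so much faster.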
 

eptesicus

Active Member
Jun 25, 2017
Well, now that makes sense... I suppose I can't keep my existing cards and use an LSI00292 to add the CacheCade features? I imagine it would have to replace my advanced feature key? I see that the performance accelerator key (81Y4426) adds the features, but I don't see them for sale anywhere anymore.

What card should I get to have RAID 6 and CacheCade? And what should I look for to use as the cache? What size SSDs (or maybe something like a Fusion-io card) should I be looking at if I currently have 56TB of storage?