Software versus Hardware RAID - Plans in Q3


TangoWhiskey9

Active Member
Jun 28, 2013
As we enter Q3, what are people planning in terms of hardware versus software RAID? Or are people experimenting more with Lustre and HDFS these days?

Our company is looking more at GlusterFS and we got approval last week to go ahead with a pilot in Q3.

I'm less interested in the merits and more interested in seeing what peers are doing in the near future.
 

gigatexal

I'm here to learn
Nov 25, 2012
While I don't have the work experience to talk about distributed file systems like the ones you mentioned, FWIW, I found that going to three IBM M1015s and software RAID in ZFS was much faster than hardware RAID on a single IBM M5015 controller. This is likely because I have many more PCIe lanes providing bandwidth to the drives, but still, software RAID has come a long way.
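For a rough sense of why the extra lanes matter, here's a back-of-the-envelope sketch in Python. The PCIe 2.0 x8 figure for these cards is my assumption (roughly 500 MB/s per lane per direction), not something from the spec sheets in this thread:

# Approximate host-side bandwidth available to the controllers.
# Assumes each card sits in a PCIe 2.0 x8 slot at ~500 MB/s per lane.
PCIE2_MB_PER_LANE = 500

def host_bandwidth_gb(cards, lanes_per_card=8):
    """Rough aggregate host-side bandwidth in GB/s."""
    return cards * lanes_per_card * PCIE2_MB_PER_LANE / 1000.0

print(host_bandwidth_gb(1))  # one controller: ~4 GB/s to the host
print(host_bandwidth_gb(3))  # three HBAs: ~12 GB/s to the host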
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
I've found that LSI controllers don't scale in random I/O as well as simple RAID-1, so I've been converting to RAID-1 with the smallest stripe size and using spans with ESXi. Each span in ESXi gets its own queue depth, and VMFS naturally spreads its load over the storage space, so it's ghetto but it works.

The default stripe size is something most people are not aware of. A 64 KB stripe across 2 SSDs means writing 32 KB to each SSD for a 1-byte change! HP SmartArray RAID-10 defaults to a 128 KB strip (strip * number of drives = stripe). A lot of documentation uses strip and stripe interchangeably.

128 KB strip * 10 drives in RAID-10 = 1.28 MB of writes per 1 byte changed. That causes latency, and with SSDs, unnecessary wear in some cases.
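If anyone wants to sanity-check that math for their own array, here's the arithmetic as a quick Python sketch. The "full strip rewritten on every drive" worst case is my reading of the above; actual controller behaviour will vary:

# Worst-case KB written for a tiny logical change, assuming every drive
# in the set rewrites a full strip (strip * number of drives = stripe).
def worst_case_write_kb(strip_kb, drives):
    return strip_kb * drives

print(worst_case_write_kb(32, 2))    # 64 KB stripe on 2 SSDs: 64 KB written for a 1-byte change
print(worst_case_write_kb(128, 10))  # HP default 128 KB strip, 10-drive RAID-10: 1280 KB (~1.28 MB)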

Latency kills random IOPS.

The M5014 just sucks with RAID-10 and SSDs; it doesn't scale in real life compared to many RAID-1 volumes and manual load spreading.