RAID Config for Archive Server


Mr. F

Active Member
Sep 5, 2011
172
30
28
I need to configure a multi-disk setup on a Windows Server system. Looking for different ideas and opinions and their merits. I'll use Windows Server 2012 R2, or if Windows Server 2016 is out by the time I make the purchases, I'll use that.

The goal of this system is high capacity, high speed, and tolerance for several disk failures. The data will be backed up elsewhere, so a full failure would be painful, but not catastrophic.

Will use a Supermicro 846 chassis with a SAS3 expander backplane and 10GbE NICs. It will have actively used shares and archival. Disk setup will be the following:

10x HGST He10 10TB SAS HDD -OR- 10x He8 8TB SAS HDD (may expand to 20 drives when we get the budget)
4x Intel S3610 800GB SSD

I can think of:
Storage Spaces
  • Mirror with SSD tier
  • Parity with SSD tier
LSI/Adaptec/etc. RAID card. If going this route, which card would you suggest, and which RAID level?
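For the Storage Spaces route, this is roughly what I'm picturing in PowerShell (pool/tier names and sizes are placeholders, and from what I've read 2012 R2 may not support tiers on a parity space at all, so the parity option might be off the table):

Code:
# Pool all eligible (raw, non-RAID) disks, then carve out SSD and HDD tiers.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "ArchivePool" `
    -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks

$ssd = New-StorageTier -StoragePoolFriendlyName "ArchivePool" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "ArchivePool" -FriendlyName "HDDTier" -MediaType HDD

# Mirror with SSD tier; tier sizes here are made up.
New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" -FriendlyName "Shares" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 700GB,30TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB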

How would you do it?
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,650
2,066
113
My understanding is that the He10 drives are SMR and not meant for "actively used shares," only archive; they're rather slow too (<70MB/s write).

Are you planning to use the He10s for storage/archive and the Intel SSDs for "active shares"? If not, that's the way I'd do it.

Depending on the performance needed, I'd run the 4x SSDs in RAID10, and your archive in RAID6.
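Quick math on usable space under that layout, assuming the 8TB drives and all ten in one group:

Code:
# RAID10 across the 4 SSDs, RAID6 across the 10 HDDs:
$ssdUsableTB = (4 / 2) * 0.8    # 1.6 TB usable for active shares
$hddUsableTB = (10 - 2) * 8     # 64 TB usable for archive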
 

j_h_o

Active Member
Apr 21, 2015
644
180
43
California, US
Storage Spaces and hardware RAID don't mix. You need to directly expose the disks, so if you went this route, just get an HBA.
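A quick way to confirm the disks really are exposed raw to Windows (anything hidden behind a RAID volume won't show up here):

Code:
Get-PhysicalDisk -CanPool $true | Select FriendlyName, MediaType, Size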

I've done some small setups with 5 or 6 drives with Storage Spaces and SSD, but never got the throughput I was expecting. How many users, and what kind of IOPS are you expecting?

I'd do hardware RAID6 on an LSI card. I like the older 3ware cards 'cause I like the management interface. I've had no end of problems with Adaptec 8 series on 2012R2.
 

gea

Well-Known Member
Dec 31, 2010
3,173
1,197
113
DE
For a long-term archive server you must face the silent data corruption problem, especially with high-capacity storage. If you must use Windows, check ReFS with data checksums enabled and regular scrubbing.
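For example, on ReFS the data checksums (integrity streams) can be switched on at format time and checked per file afterwards (drive letter and paths are examples only):

Code:
Format-Volume -DriveLetter E -FileSystem ReFS -SetIntegrityStreams $true

# Verify or enable later, per file or per folder:
Get-FileIntegrity -FileName E:\archive\file.bin
Set-FileIntegrity -FileName E:\archive -Enable $true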

But while Windows is on the right track with ReFS as a ZFS clone, it is not yet comparable regarding performance, features, or reliability.

Have you ever thought of a web-managed ZFS storage appliance with a RAID-Z2 or Z3 pool (Z3 allows any three disks to fail), with monthly data scrubs to find and repair silent errors based on data checksums? On the controller side you need RAID-less HBA adapters like the LSI 9207, as ZFS is software RAID without the write-hole problem of hardware RAID. You do not need SSD cache devices for performance with ZFS and archive systems. Use faster RAM caching instead.

If possible, add a second system at a different location and enable replication between them. Use the ZFS snapshot feature for read-only previous versions as a file history.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
From what I understand, SMR drives may have long-term performance problems in a RAID. Unfortunately, there are not that many choices when you go up to that level of capacity; SMR drives are the next step. It looks like Hitachi still sells the He8 8TB as a regular (non-SMR) drive, which you specified as an option and would be the best case.

If you were doing hardware RAID6, we usually do packs of 12 drives: 9 data, 2 parity, and 1 hot spare. This scales well as you add additional packs; you get more global hot spares, which is important if you're not close to your datacenter. The LSI cards can also do CacheCade, which puts SSDs between the disks and the system, helping cache more data.

ZFS is interesting; I just started using it and it's working out pretty well. It's not an option for the Windows Server side of things, but another option could be what gea suggested. Or, if you wanted to get creative, you could do a ZFS filer for the disks and share out raw volumes to a Windows server to manage. Shove a Mellanox FDR IB card in both machines, back-to-back, to reduce latency and give you full bandwidth. Kind of silly, but you get the ZFS protection and the Windows frontend for managing the volumes.
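If the filer exported a zvol over iSCSI (say on IPoIB), the Windows side of the attach would be roughly this (portal address is hypothetical):

Code:
New-IscsiTargetPortal -TargetPortalAddress 192.168.50.10
Get-IscsiTarget | Connect-IscsiTarget
# The zvol then shows up as a raw disk to initialize and format as usual.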

I'm full of crazy ideas.
 

Mr. F

Active Member
Sep 5, 2011
172
30
28
Thanks for the responses so far! A few more points to answer the questions:
  • The 846 SuperServer already has a SAS3 HBA; if going the RAID card route, I'm looking for a card model suggestion and RAID level config
  • ReFS is going to be the filesystem
  • Windows Server is the only option for this project
  • The idea was to use the 10x HDDs for capacity and the SSDs either for tiering (Storage Spaces) or caching (RAID card)
  • Regarding He10 10TB performance: I may go with the He8 SAS 8TB drives if that's an issue. The 10TB may even be too expensive; I don't have pricing yet.
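One thing I like about the tiering idea: known-hot files can be pinned to the SSD tier instead of waiting for the optimizer to notice them (path and tier name below are hypothetical):

Code:
Set-FileStorageTier -FilePath "S:\shares\hot-project.vhdx" -DesiredStorageTierFriendlyName "SSDTier"
Optimize-Volume -DriveLetter S -TierOptimize   # move pinned files now instead of at the nightly task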
 

Mr. F

Active Member
Sep 5, 2011
172
30
28
...I've done some small setups with 5 or 6 drives with Storage Spaces and SSD, but never got the throughput I was expecting. How many users, and what kind of IOPS are you expecting?

I'd do hardware RAID6 on an LSI card. I like the older 3ware cards 'cause I like the management interface. I've had no end of problems with Adaptec 8 series on 2012R2.
Same experience here with Storage Spaces. I'm hoping that with more disks and SSD tiering I can get better performance out of it. What type of issues have you seen with the Adaptec cards? From what I see, they're made by PMC, which also makes my absolute favorite HP Smart Array cards, but I'm not sure about using an HP SA card in a non-HP production server.
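When I get the hardware I'll benchmark the tiered volume directly rather than through the shares; something like DiskSpd with a test file larger than the SSD tier should show the worst case (flags from memory, size made up):

Code:
# 4 threads, 8 outstanding IOs, 64K random, 30% writes, 60 seconds, 2TB test file:
.\diskspd.exe -b64K -d60 -o8 -t4 -w30 -r -c2000G S:\test.dat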
 

Mr. F

Active Member
Sep 5, 2011
172
30
28
...
If you were doing hardware RAID6, we usually do packs of 12 drives: 9 data, 2 parity, and 1 hot spare. This scales well as you add additional packs; you get more global hot spares, which is important if you're not close to your datacenter. The LSI cards can also do CacheCade, which puts SSDs between the disks and the system, helping cache more data.
...
Thanks for the suggestion. How do you scale (if in the same chassis)? RAID60, or do you just add to the existing array for more capacity?

This is for an on-prem installation in our (small) server room.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,650
2,066
113
I'm confused. You said it was an archive, but then you said SSD caching... you don't need that for an archive because the data isn't commonly accessed. The He10 would need 2x+ the drives of other enterprise disks to equal their performance, due to its slow write speed and mediocre reads.

Really, the He10 isn't an option if you are going to access the data routinely, especially routinely enough to warrant a caching layer.

The He8, while huge in capacity, has the same rated error rate as the WD Red NAS drives... not sure why/how they consider it "Enterprise" on that fact alone. (SSDs have an even lower error rate.)
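If that rating really is the usual 1-in-10^14 bits, the back-of-envelope for one full-drive read (what every surviving disk sees during a rebuild) looks like this:

Code:
$bits = 8e12 * 8    # 8 TB expressed in bits
$bits / 1e14        # ~0.64 expected unreadable sectors per full read of one drive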


If it's purely archive, ditch the SSDs and go He10.
If it's not archive, go He8 and accept the higher error rate if you need that density; if not, I'd pick a drive with a lower error rate.


My $0.02.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
For scaling, we add a second pack of disks and then add another volume. Since we use Lustre, we can combine these into the same address space.