Hardware or Software RAID for 30x 6TB Windows Build


Mikhail

New Member
Feb 15, 2017
I'm building a workstation that is going to be used to process a lot of research data. I've built a few of these before, but I'm not really sure I was getting good performance. It has to run Windows Server 2016.

At the end of the day, the storage (30x 6TB) needs to be exposed as a single drive.

My last build used an Adaptec RAID controller with a SAS expander to get more ports, and the whole ~120TB was set up as a single RAID 6.

I'm trying to figure out how to do it better.

I heard Storage Spaces has RAID 6-style parity support as of Server 2016? Anybody have experience with what a failure scenario looks like? Maybe put a few HighPoint Rocket 750s in the case... that card doesn't actually do RAID itself, right? If I put two cards in, do I get better sequential write performance?

From the hardware RAID perspective, I was thinking of maybe putting two Adaptec 8805s in the case. Not sure how that would work out.

Any thoughts?
 

acquacow

Well-Known Member
Feb 15, 2017
ReFS on Storage Spaces - it protects against bit rot, is super easy to use, and scales well.

I recommend setting up your drives in Storage Spaces with multiple columns so that data is striped across them for extra speed.

I can write you the PowerShell command line if you are interested. You can't do the column striping in Windows 10 without using PowerShell; I think you can do it all in the GUI in Server 2012.
 

acquacow

Well-Known Member
Feb 15, 2017
There's really no reason for hardware RAID anymore... CPUs have so much extra compute these days...

You basically want something like this:

#---------------------------------------------
# Creation of a 30x HDD two-way mirror (15 columns)
#---------------------------------------------
# List the poolable disks and confirm only the 30 HDDs show up
Get-PhysicalDisk -CanPool $True | ft FriendlyName,OperationalStatus,Size,MediaType
# Grab the poolable disks (skip anything with an unknown media type)
$pd = (Get-PhysicalDisk -CanPool $True | Where MediaType -NE Unspecified)
# Create the pool, then a two-way mirror with 15 columns on top of it
New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "StoragePool"
New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName BulkStorage -ResiliencySettingName Mirror -NumberOfColumns 15 -UseMaximumSize
# Initialize, partition, and format the new virtual disk as ReFS
Get-VirtualDisk BulkStorage | Get-Disk | Initialize-Disk -PartitionStyle GPT
Get-VirtualDisk BulkStorage | Get-Disk | New-Partition -DriveLetter E -UseMaximumSize
Format-Volume -DriveLetter E -FileSystem ReFS -Confirm:$false

Take care to adjust that first "Get-PhysicalDisk" line and make sure only your 30 HDDs are listed; otherwise, disable any free disks you don't want in the pool in Disk Management to make your life easier.
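If it helps, here's a minimal sketch of filtering the poolable disks by media type and size so stray disks never end up in the pool - the 5TB cutoff is just an assumption meant to catch 6TB drives, so adjust it for your hardware:

# Sketch only: keep poolable HDDs larger than ~5TB (assumed cutoff for 6TB drives)
$pd = Get-PhysicalDisk -CanPool $True | Where-Object { $_.MediaType -eq 'HDD' -and $_.Size -gt 5TB }
$pd | ft FriendlyName,SerialNumber,Size,MediaType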

This will create a two-way software mirror with 15 columns to stripe the data for maximum read/write perf.

Should give you 2GB/sec write, 4GB/sec read perf.
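Once it's built, a quick sanity check that the layout actually came out as a 15-column two-way mirror (just a sketch, assuming the virtual disk is named BulkStorage as above):

# Verify the column count and resiliency of the new virtual disk
Get-VirtualDisk -FriendlyName BulkStorage | Format-List ResiliencySettingName, NumberOfColumns, NumberOfDataCopies, Size, FootprintOnPool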
 

i386

Well-Known Member
Mar 18, 2016
Even with Server 2016 I never got decent performance from parity spaces. I tried different combinations of SSDs as write-back cache and journaling drives, but even then writes were slower than a single WD 3TB Green drive.
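For reference, a minimal sketch of the kind of parity space with an SSD write-back cache I mean - the pool name and the 100GB cache size are placeholders, and the pool needs SSDs in it for the write cache to land on flash:

# Sketch only: parity space with an explicit write-back cache (pool name and cache size are placeholders)
New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName ParitySpace -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 100GB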

@Mikhail
For 100+ TB I wouldn't use RAID 6, but RAID 60.

If you need a lot of space (RAID 5/6 volumes or the nested versions, RAID 50/60) and good write speeds, I would look at RAID controllers with SSD caching technologies like LSI's CacheCade or Adaptec's maxCache.
 

Mikhail

New Member
Feb 15, 2017
Even with Server 2016 I never got decent performance from parity spaces. I tried different combinations of SSDs as write-back cache and journaling drives, but even then writes were slower than a single WD 3TB Green drive.

So, if I put 30 drives together in a RAID 60-style parity setup, I'll get worse performance than a single drive?
 

i386

Well-Known Member
Mar 18, 2016
Yes, that's what I got when I tested Storage Spaces parity spaces in Windows Server 2016.
 

Mikhail

New Member
Feb 15, 2017
Yes, that's what I got when I tested Storage Spaces parity spaces in Windows Server 2016.
Is that because Storage Spaces really sucks, or would one get similar performance from a comparable Linux software RAID?

On my previous build, using an Adaptec hardware RAID controller and an Intel expander, I got 2000 MB/s on a new 24x 6TB array. As the array filled up, it dropped to something like 400 MB/s. Benchmarks were done with CrystalDiskMark.

--Edit--
Hardware RAID with Adaptec 8805 => RES3TV360 expander => 24x 6TB HGST
[CrystalDiskMark screenshot: crystal_benchmark.png]
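For anyone who wants to repeat the sequential test outside CrystalDiskMark, a rough sketch using Microsoft's diskspd - the target path, file size, and duration are placeholders, not what was actually run here:

# Sketch only: 1MB sequential read pass and write pass with caching disabled (values are placeholders)
.\diskspd.exe -c64G -b1M -o8 -t1 -d60 -Sh -w0 E:\iotest.dat
.\diskspd.exe -c64G -b1M -o8 -t1 -d60 -Sh -w100 E:\iotest.dat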
 

acquacow

Well-Known Member
Feb 15, 2017
Parity perf isn't the greatest - it wasn't awful, but that's why I use two-way mirroring.

Here are some numbers I tested with some SSDs:
Two-way mirror:


Parity:


No resiliency (clearly it doesn't stripe):


For comparison here's the same 3 ioDrives in a normal striped dynamic disk volume on the same host (no storage spaces):


And with storage spaces in a 2-way mirror:


Specs on these drives are 1.5GB/sec large-block read each, 1.3GB/sec write.
I'm going to lump in a 4th ioDrive as soon as I move my ESX datastore off of that card; then I can test a 2-column stripe and all the other configurations to get some final perf #s. I can't imagine a 2-way mirror is optimized in any way on an odd number of devices...

Also, this is all running in a Windows 10 VM with the drives passed through. I don't have the BIOS on the box set for max perf, so there could be some CPU throttling affecting the results as well.
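If anyone wants to recreate the comparison, a rough sketch of the three Storage Spaces layouts above - not the exact commands used here; the pool name "SSDPool" and the 100GB size are placeholders:

# Sketch only: the three layouts compared above, carved from an assumed pool named "SSDPool"
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName TwoWayMirror -ResiliencySettingName Mirror -NumberOfDataCopies 2 -Size 100GB
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName ParityTest -ResiliencySettingName Parity -Size 100GB
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName SimpleTest -ResiliencySettingName Simple -Size 100GB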
 

acquacow

Well-Known Member
Feb 15, 2017
I get very similar perf in mdadm arrays. The problem with mdadm is that I can't do a 2-way mirror across 3 devices.

 

TedB

Active Member
Dec 2, 2016
@gigatexal when was the last time you had a look at Storage Spaces? A lot, and I mean a lot, has changed in the last two years, even though it was not advertised by MS. I am not an MS evangelist, but I can say that currently Storage Spaces, when using mirror (not parity), is rather efficient and stable.