Hardware or Software RAID for 30x6TB Windows Build


TedB

Active Member
Dec 2, 2016
Say your motherboard goes up in smoke and you get a new one. How do you recover your RAID in Storage Spaces?
More or less: move the drives to another server / disk shelf (disk order doesn't matter), then run a few PowerShell commands to bring the storage online again. There may be GUI tools as well, but I don't use the GUI, just PowerShell, since I come from the Linux world.
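
For reference, a minimal recovery sequence looks roughly like this, assuming the pool comes up read-only after the move (the pool name "Pool" is a placeholder):

    # Confirm the moved pool was detected (primordial pools are just the raw disks)
    Get-StoragePool -IsPrimordial $false
    # Clear the read-only flag the pool may carry after a foreign import
    Set-StoragePool -FriendlyName "Pool" -IsReadOnly $false
    # Attach the spaces and bring their disks online
    Get-VirtualDisk | Connect-VirtualDisk
    Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false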
 

acquacow

Well-Known Member
Feb 15, 2017
Say your motherboard goes up in smoke and you get a new one. How do you recover your RAID in Storage Spaces?
I've moved my storage space between 3 different motherboards and two different VMs running in ESXi. The drives are auto-detected and assembled just fine.

I even moved them off my LSI card that was in RAID/JBOD mode, cross-flashed it into IT mode for SATA passthrough, and my storage space came back up just fine and didn't care that the disk adapter had changed.
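
A quick way to sanity-check a pool after a move like that is a few read-only queries (a sketch; the friendly names are whatever your pool reports):

    # All three should report Healthy / OK after the hardware swap
    Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus, OperationalStatus
    Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
    Get-PhysicalDisk | Select-Object FriendlyName, Usage, HealthStatus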
 

Tom5051

Active Member
Jan 18, 2017
Windows software RAID is just awful and has been since its inception, as have dynamic disks. I can't believe it's even a consideration over hardware RAID.
 

acquacow

Well-Known Member
Feb 15, 2017
Windows software RAID is just awful and has been since its inception, as have dynamic disks. I can't believe it's even a consideration over hardware RAID.
Storage Spaces is an entirely different thing from the old Windows software RAID.
 

ServerSemi

Active Member
Jan 12, 2017
I'm seeing the same thing on my Windows 10 machine. Writes start at 500 MB/s, then after 2 seconds drop to 25-30 MB/s for the rest of the transfer. Thinking of buying a top-of-the-line RAID card, to be honest.
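
That pattern (fast start, then a sharp drop) usually points at a cache filling up rather than the disks or controller. One way to check is a sustained write test with caching bypassed, for example with Microsoft's diskspd tool (a sketch; the 20GB file size and D:\test.dat target are assumptions):

    # 60 seconds of sequential 1MB writes, software and hardware caching disabled
    .\diskspd.exe -c20G -d60 -w100 -b1M -t1 -o8 -Sh D:\test.dat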
 

Tom5051

Active Member
Jan 18, 2017
You'd only need an 8-port SAS2/SATA3 (6Gb/s) RAID card that can dual-link to an expander. Mechanical drives have a hard time saturating even a SATA2 connection, so SATA3 is plenty. Get a 36-port expander and you can add all the drives.
SAS3 (12Gb/s) is overkill in my opinion unless you plan to run SSDs as well.
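
Rough back-of-the-envelope numbers for that layout (all figures assumed, not measured):

    # ~150 MB/s per mechanical drive vs ~600 MB/s usable per 6Gb/s lane
    $drives = 30; $perDrive = 150   # MB/s sequential, assumed
    $lanes  = 8;  $perLane  = 600   # dual-linked x4 + x4 SAS2 wide ports
    "Drives: ~$($drives * $perDrive) MB/s aggregate; dual-linked SAS2: ~$($lanes * $perLane) MB/s"

Thirty drives at full sequential speed (~4,500 MB/s) sit just under what a dual-linked SAS2 connection can move (~4,800 MB/s), which is why SAS3 buys little for spinning disks.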
 

acquacow

Well-Known Member
Feb 15, 2017
I'm seeing the same thing on my Windows 10 machine. Writes start at 500 MB/s, then after 2 seconds drop to 25-30 MB/s for the rest of the transfer. Thinking of buying a top-of-the-line RAID card, to be honest.
I can sustain 300 MB/sec writes for hours... Writes are striped across 4 mirrored disks, which is why I only get 300 MB/sec.

Each drive is a 4TB HGST 3.5" unit that will read/write at 150 MB/sec.

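If that is 4 disks arranged as a two-column mirror (my reading of the numbers: 2 columns x ~150 MB/s per drive = ~300 MB/s), the same layout can be requested explicitly. A sketch with placeholder names:

    # 4 disks as 2 mirrored columns; writes stripe across both columns
    New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror4" `
        -ResiliencySettingName Mirror -NumberOfColumns 2 -UseMaximumSize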
 

ServerSemi

Active Member
Jan 12, 2017
To be honest, I'm experiencing issues with my drives. Storage Spaces could be fine; it's just that my drives are writing slowly.
 

i386

Well-Known Member
Mar 18, 2016
Compared to the 1.3 GB/s a single ioDrive2 can do, the 1,000 MB/s in the parity space is ~25% slower. That's more than I would expect for RAID 5/6-style "overhead".

Can you build a storage space with HDDs in a parity space and the ioDrives as dedicated write-back cache and/or journaling disks, and post some benchmark results?
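
For anyone trying that combination: Storage Spaces can dedicate pool disks to journaling and takes an explicit write-back cache size at creation time. A minimal sketch, with pool and disk names as placeholders and the 8GB cache size picked arbitrarily:

    # Mark the ioDrives as journal disks so parity writes land on flash first
    Get-PhysicalDisk | Where-Object FriendlyName -like "ioDrive*" |
        ForEach-Object { Set-PhysicalDisk -InputObject $_ -Usage Journal }
    # Parity space over the remaining HDDs with an explicit write-back cache
    New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "ParityWB" `
        -ResiliencySettingName Parity -UseMaximumSize -WriteCacheSize 8GB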
 

acquacow

Well-Known Member
Feb 15, 2017
I have these ioDrives running under ESXi, passed through to a Windows 10 VM. I need to test one individually, but I think ESXi is limiting them a tad; I'm getting warnings that the PCIe max request size is being limited to 128.

I'm out for the next few days, but when I get back home on Tuesday, I can play with it more.

I already built a storage space with the ioDrives as an SSD tier on top of a mirror space, but I did not see any performance increase over the plain mirror space. I even had a gig of write-back cache set up.

See here: Fusion-io ioDrive 2 1.2TB Reference Page
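
For reference, a tiered space like the one described is typically assembled along these lines. A sketch only: tier sizes and names are placeholders, and tiering support varies by Windows edition:

    # Define SSD and HDD tiers in the pool, then carve a tiered mirror with write-back cache
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool" -FriendlyName "HDDTier" -MediaType HDD
    New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Tiered" `
        -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 2TB `
        -ResiliencySettingName Mirror -WriteCacheSize 1GB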