
Hardware or Software Raid for 30x6TB Windows Build

Discussion in 'RAID Controllers and Host Bus Adapters' started by Mikhail, Feb 15, 2017.

  1. Mikhail

    Mikhail New Member

    Joined:
    Feb 15, 2017
    Messages:
    6
    Likes Received:
    0
    I'm building a workstation that will be used to process a lot of research data. I've built a few of these before, but I'm not really sure I ever got good performance out of them. It has to run Windows Server 2016.

    At the end of the day, the 30x6TB of storage needs to be exposed as a single drive.

    My last build used RAID 6 on an Adaptec controller with an expander to get more ports. The whole ~120TB array was set to RAID 6.

    I'm trying to figure out how to do it better.

    I heard Storage Spaces has RAID 6-style dual parity support as of Server 2016? Does anybody have experience with what a failure scenario feels like? Maybe put a few HighPoint Rocket 750s in the case... that card doesn't actually do RAID, right? If I put two cards in, do I get better sequential write performance?

    From the hardware RAID perspective, I was thinking of maybe putting two Adaptec 8805s in the case. Not sure how that would work out.

    Any thoughts?
     
    #1
    Last edited: Feb 15, 2017
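For reference, the usable capacity of 30x6TB under the layouts discussed in this thread works out roughly as follows (decimal TB, filesystem and metadata overhead ignored; a back-of-the-envelope sketch, and the two-span RAID 60 split is illustrative, not something specified in the thread):

```python
# Rough usable-capacity math for 30 x 6 TB drives (decimal TB,
# ignoring filesystem/metadata overhead).
DRIVES, SIZE_TB = 30, 6
raw = DRIVES * SIZE_TB                     # 180 TB raw

raid6 = (DRIVES - 2) * SIZE_TB             # one span, 2 drives' worth of parity
raid60_2span = (DRIVES - 2 * 2) * SIZE_TB  # 2 spans of 15, 2 parity drives each
mirror2 = raw / 2                          # two-way mirror halves capacity

print(raw, raid6, raid60_2span, mirror2)   # 180 168 156 90.0
```

The spread between 168TB (RAID 6) and 90TB (two-way mirror) is the capacity price of the mirror layouts recommended later in the thread.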
  2. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    1,633
    Likes Received:
    226
    ZFS - protects against bit rot, is super easy to use and scales well.
     
    #2
  3. Mikhail

    Mikhail New Member

    Joined:
    Feb 15, 2017
    Messages:
    6
    Likes Received:
    0
    Ain't no ZFS for Windows.
     
    #3
  4. acquacow

    acquacow Member

    Joined:
    Feb 15, 2017
    Messages:
    44
    Likes Received:
    13
    ReFS on Storage Spaces - protects against bit rot, is super easy to use, and scales well.

    I recommend setting up your drives in Storage Spaces with a number of columns, so that you can stripe across them for extra speed.

    I can write you the PowerShell command line if you're interested. You can't do the column striping in Windows 10 without PowerShell; I think you can do it all in the GUI in Server 2012.
     
    #4
  5. Mikhail

    Mikhail New Member

    Joined:
    Feb 15, 2017
    Messages:
    6
    Likes Received:
    0
    So, basically give up on hardware RAID?
     
    #5
  6. acquacow

    acquacow Member

    Joined:
    Feb 15, 2017
    Messages:
    44
    Likes Received:
    13
    There's really no reason for it anymore... CPUs have so much extra compute...

    You basically want something like this:

    ---------------------------------------------
    #Creation of 30x HDD two-way mirror, 15 columns
    ---------------------------------------------
    Get-PhysicalDisk -CanPool $True | ft FriendlyName,OperationalStatus,Size,MediaType
    $pd = (Get-PhysicalDisk -CanPool $True | Where MediaType -NE Unspecified)
    New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "HDDPool"
    New-VirtualDisk -StoragePoolFriendlyName "HDDPool" -FriendlyName BulkStorage -ResiliencySettingName Mirror -NumberOfColumns 15 -UseMaximumSize
    Get-VirtualDisk BulkStorage | Get-Disk | Initialize-Disk -PartitionStyle GPT
    Get-VirtualDisk BulkStorage | Get-Disk | New-Partition -DriveLetter E -UseMaximumSize
    Format-Volume -DriveLetter E -FileSystem ReFS -Confirm:$false

    Take care with that first "Get-PhysicalDisk" line and make sure only your 30 HDDs are listed; otherwise, disable any spare disks you don't want in the pool in Disk Management to make your life easier.

    This will create a software mirror with 15 columns to stripe the data for maximum read/write perf.

    Should give you 2GB/sec write, 4GB/sec read perf.
     
    #6
    Last edited: Feb 15, 2017
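The 2GB/sec-write / 4GB/sec-read estimate above can be sanity-checked with simple stripe math: writes stripe across the 15 columns (each write lands on both halves of a mirror pair), while reads can be serviced from either copy. Assuming roughly 140 MB/s sustained per 6TB disk (an assumed figure, not measured anywhere in this thread):

```python
# Sanity check on the 15-column two-way mirror estimate.
# PER_DISK_MBS is an assumed sequential throughput -- adjust for your drives.
PER_DISK_MBS = 140
COLUMNS = 15

write_mbs = COLUMNS * PER_DISK_MBS       # each write goes to both mirror halves,
                                         # so only the column count scales writes
read_mbs = COLUMNS * 2 * PER_DISK_MBS    # reads can use both copies of each column

print(write_mbs / 1000, read_mbs / 1000)  # ~2.1 GB/s write, ~4.2 GB/s read
```

These are sequential best-case numbers; random I/O and a filling filesystem will land well below them.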
  7. Mikhail

    Mikhail New Member

    Joined:
    Feb 15, 2017
    Messages:
    6
    Likes Received:
    0
    So, how many Rocket 750 cards?
     
    #7
  8. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    1,633
    Likes Received:
    226
    Ahh, Windows only, ugh. ReFS, I guess. I've never had good luck with Storage Spaces, but it sounds like they fixed things in 2016.
     
    #8
  9. i386

    i386 Active Member

    Joined:
    Mar 18, 2016
    Messages:
    160
    Likes Received:
    26
    Even with Server 2016 I never got decent performance from parity spaces. I tried different combinations of SSDs as write-back cache and journaling drives, but even then writes were slower than a single WD 3TB Green drive.

    @Mikhail
    For 100+ TB I wouldn't use RAID 6, but RAID 60.

    If you need a lot of space (RAID 5/6 volumes, or the nested versions RAID 50/60) and good write speeds, I would look at RAID controllers with SSD caching technologies, like LSI's CacheCade or Adaptec's maxCache.
     
    #9
    gigatexal likes this.
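The RAID 6 vs RAID 60 trade-off being suggested here is capacity versus failure domains: splitting 30 drives into more RAID 6 spans costs parity capacity but shrinks each rebuild domain and raises the best-case number of survivable failures. A sketch (span counts illustrative; 2 parity drives per span assumed):

```python
# Capacity vs fault tolerance for 30 x 6 TB: one big RAID 6 span
# versus RAID 60 (a RAID 0 stripe across multiple RAID 6 spans).
DRIVES, SIZE_TB = 30, 6

def raid60_usable(spans):
    per_span = DRIVES // spans          # drives per RAID 6 span
    return spans * (per_span - 2) * SIZE_TB

for spans in (1, 2, 3):                 # spans=1 is plain RAID 6
    worst = 2                           # 3 failures in one span lose the array
    best = 2 * spans                    # failures spread evenly across spans
    print(spans, raid60_usable(spans), worst, best)
```

So 2 spans give up 12TB versus a single RAID 6 but can, in the best case, survive 4 drive losses instead of 2, and each rebuild only touches 15 drives.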
  10. Mikhail

    Mikhail New Member

    Joined:
    Feb 15, 2017
    Messages:
    6
    Likes Received:
    0

    So, if I put together 30 drives in RAID 60, I'll get worse performance than a single drive?
     
    #10
  11. i386

    i386 Active Member

    Joined:
    Mar 18, 2016
    Messages:
    160
    Likes Received:
    26
    Yes, that's what I got when I tested storage spaces and parity spaces in Windows Server 2016.
     
    #11
  12. Mikhail

    Mikhail New Member

    Joined:
    Feb 15, 2017
    Messages:
    6
    Likes Received:
    0
    Is that because Storage Spaces really sucks, or would one get similar performance from a comparable Linux software RAID?

    On my previous build, using an Adaptec hardware RAID controller and an Intel expander, I got 2000 MB/s on a fresh 24x6TB array. As the array filled up it dropped to something like 400 MB/s. Benchmarked with CrystalDiskMark.

    --Edit--
    Hardware RAID with Adaptec 8805 => RES3TV360 expander => 24x6TB HGST
    (attachment: crystal_benchmark.png)
     
    #12
    Last edited: Feb 15, 2017
  13. acquacow

    acquacow Member

    Joined:
    Feb 15, 2017
    Messages:
    44
    Likes Received:
    13
    Parity perf isn't the greatest, though it wasn't awful either; still, that's why I use two-way mirroring.

    Here are some numbers I tested with some SSDs.

    Two-way mirror: (benchmark screenshot)

    Parity: (benchmark screenshot)

    No resiliency (clearly it doesn't stripe): (benchmark screenshot)

    For comparison, here's the same 3 ioDrives in a normal striped dynamic-disk volume on the same host (no Storage Spaces): (benchmark screenshot)

    And with Storage Spaces in a 2-way mirror: (benchmark screenshot)

    Specs on these drives for large-block reads are 1.5GB/sec each, 1.3GB/sec write.
    I'm going to lump in a 4th ioDrive as soon as I move my ESX datastore off of that card; then I can test a 2-column stripe and the other configurations to get some final perf numbers. I can't imagine a 2-way mirror is optimized in any way on an odd number of devices...

    Also, this is all running in a Windows 10 VM with the drives passed through. I don't have the BIOS on the box set for max perf, so there could be some CPU throttling affecting results as well.
     
    #13
    Last edited: Feb 15, 2017
    Mikhail likes this.
  14. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    1,633
    Likes Received:
    226
    Storage Spaces is such a letdown when it comes to writes.
     
    #14
  15. acquacow

    acquacow Member

    Joined:
    Feb 15, 2017
    Messages:
    44
    Likes Received:
    13
    How so? I get near-native performance on my flash and hdd tiers...
     
    #15
  16. gigatexal

    gigatexal I'm here to learn

    Joined:
    Nov 25, 2012
    Messages:
    1,633
    Likes Received:
    226
    @i386 and I had terrible experiences. Did you ever try your drives in, say, Linux using just mdadm? What perf do you get without the SSD caching?
     
    #16
  17. acquacow

    acquacow Member

    Joined:
    Feb 15, 2017
    Messages:
    44
    Likes Received:
    13
    I get very similar perf in mdadm arrays. The problem with mdadm is that I can't do a 2-way mirror with 3 devices.

    Sent from my XT1650 using Tapatalk
     
    #17
  18. TedB

    TedB New Member

    Joined:
    Dec 2, 2016
    Messages:
    18
    Likes Received:
    3
    @gigatexal when was the last time you had a look at Storage Spaces? A lot, and I mean a lot, has changed in the last two years, even though it was not advertised by MS. I am not an MS evangelist, but I can say that currently Storage Spaces, while using mirror (not parity), is rather efficient and stable.
     
    #18
  19. ElBerryKM13

    ElBerryKM13 Member

    Joined:
    Jan 12, 2017
    Messages:
    38
    Likes Received:
    7
    Say your motherboard goes up in smoke and you get a new one. How do you recover your RAID in Storage Spaces?
     
    #19
    Jon Massey likes this.