Advice on controller for SSD cache + Raid 10 in Win10


s3ntro

Member
Apr 25, 2016
I'm very new to this, so I thought I'd ask the experts instead of throwing money away.

I'm currently using a Z68 motherboard running Windows 10 as a NAS. It also runs my security camera software. I've run Xpenology and FreeNAS. Both are fine but the security camera stuff isn't available and driver support for 10G networking (w/Xpenology) is lacking.

I'm using the motherboard's Intel controller for a RAID 10 array, 4x 5TB disks. I'm topping out at about 370 MBps read and write. While that's pretty good for write speed, it's about half what read speed should be with 4 drives. The drives are on SATA II ports, but each drive can only read and write at around 175 MBps anyway, well below the SATA II limit of 3 Gbps. The Win-RAID forums suggest the Windows 10 driver is partly to blame, as past versions (v11) had much higher speeds. I could go through the effort of re-packing the old driver into the Windows image and reinstalling, but I'm wondering if there is a better solution.
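For a sanity check, here's the back-of-envelope math for a 4-disk RAID 10 using the per-drive speed measured above (a rough sketch only; real numbers depend on the controller, driver, and workload):

```powershell
# Rough RAID 10 throughput expectation from per-drive speed.
# RAID 10 with 4 disks = 2 mirrored pairs, striped together.
$perDrive = 175   # MB/s, per-disk sequential speed measured above
$disks    = 4

$write = ($disks / 2) * $perDrive   # each write lands on both halves of a mirror
$read  = $disks * $perDrive         # reads can be serviced by every disk

"Expected write: $write MB/s"   # ~350 MB/s, close to the observed ~370
"Expected read:  $read MB/s"    # ~700 MB/s, roughly double what's observed
```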

I'd also like to use an SSD cache for the spinny disks, and this isn't possible with consumer gear.

So I was thinking of using something like this: Intel RS3DC040 PCI-Express 3.0 x8 Low Profile Ready SATA / SAS Controller Card - Newegg.com (Intel RS3DC040)

...but I'm an amateur at this stuff. I've read that these Intel controllers don't play nice with many motherboards. The LSI model numbers are confusing as hell. I still have to figure out a cabling solution that can hook up the 4 SATA HDDs and 2 SATA SSDs. I don't want a controller to find a bad sector and kick an entire disk out of the array like it might in a commercial environment. I'm kind of lost.

What would you recommend?

One last piece of info: the box itself is on a UPS but the data being stored on it isn't absolutely critical. I can deal with a power outage that wipes out a transfer. I'm more interested in speed.
 
May 11, 2016
Unless you have plans to add more hard drives soon, I'd suggest you consider spending the money on an NVMe SSD and a PCI-E x4 adapter instead, and use Windows Storage Spaces to set up a mirrored pool using the NVMe drive as a cache. Use the existing SSDs as the system drives. There are two reasons for this:
  • LSI controllers (including the Intel card you listed, which uses an LSI 3108) running CacheCade are limited to 512GB of cache per physical card (Adaptec's equivalent, MaxCache, allows 2TB). You may not need more than 512GB, but it's nice to have the option to expand.
  • The performance numbers I've seen for CacheCade are good but not impressive, and you're still going to be limited to ~550 MBps x 2 (best case) with two SATA III SSDs. A Samsung 950 Pro is going to give you 2500 MBps (best case) vs. a combined 1100 MBps. The Samsung 950 Pro also has lower latency and 300,000 IOPS vs. a combined 200,000 (4KB, QD32).
Also, note that most hardware-based controllers won't immediately drop a drive if there is a bad sector. The controller will note it and attempt to remap the sector. If it can't remap it, or the drive has exceeded a set limit for errors, then the drive gets dropped from the array.

Did you look at ZoneMinder when you were running FreeNAS?
 

acquacow

Well-Known Member
Feb 15, 2017
+1 for Storage Spaces, forget hardware RAID. I have 4 drives set up and get exactly the numbers I should.


The drives are HGST 4TB 7200 RPM. Each drive individually tests at 150 MB/sec read/write. This gets me to 300 MB/sec write and 600 MB/sec read in a RAID 10-type configuration.
 

s3ntro

Member
Apr 25, 2016
Thanks for your input Mike. I had not seen ZoneMinder. Next time I decide I might want to give FreeNAS a try I'll see how ZoneMinder works for me.

I've never tried Storage Spaces either. I was okay with ~1,100 MBps from two SSDs as that's pretty close to 10G wire speed. Your recommendation sounds like a fit though. I'll start working toward that.
 

s3ntro

Member
Apr 25, 2016
This storage spaces stuff is confusing, especially when dealing with it on Win10 vs. Server 2012 or 2016.

Does anyone know if the Win10 variant of Storage Spaces got any of the Server 2016 "Storage Spaces Direct" functionality? If not, it looks like PowerShell is necessary to configure the cache.
 

acquacow

Well-Known Member
Feb 15, 2017
The latest Win10 updates seem to have brought it up to feature parity with at least Server 2012. I'm unsure about feature parity with 2016.

Only the most basic features are in the Win10 GUI; if you actually want performance and such, you need to use PowerShell. There are a lot of code snippets out there, and I have some I've written myself to generate my config.

Since you have 4 HDDs, here's what I used for my 4:

Code:
#---------------------------------------------
# Creation of 4-HDD 2-column mirror
#---------------------------------------------
Get-PhysicalDisk -CanPool $True | ft FriendlyName,OperationalStatus,Size,MediaType
$pd = (Get-PhysicalDisk -CanPool $True | Where MediaType -NE UnSpecified)
New-StoragePool -PhysicalDisks $pd -StorageSubSystemFriendlyName "Windows Storage*" -FriendlyName "StoragePool"
New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName BulkStorage -ResiliencySettingName Mirror -NumberOfColumns 2 -UseMaximumSize
Get-VirtualDisk BulkStorage | Get-Disk | Initialize-Disk -PartitionStyle GPT
Get-VirtualDisk BulkStorage | Get-Disk | New-Partition -DriveLetter "E" -UseMaximumSize
Format-Volume -DriveLetter "E" -FileSystem ReFS -Confirm:$false
You can match your device names exactly if you want, à la:
Code:
$pd = (Get-PhysicalDisk -CanPool $True | Where FriendlyName -EQ "Fusion ioCache 1200GB")
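Once created, you can sanity-check that the pool and the column layout took effect with the standard Storage module cmdlets (the names below match the pool and virtual disk created above; the property list is just a suggestion):

```powershell
# Verify the pool and virtual disk after creation
Get-StoragePool -FriendlyName "StoragePool" | ft FriendlyName,HealthStatus,Size,AllocatedSize
Get-VirtualDisk -FriendlyName "BulkStorage" | ft FriendlyName,ResiliencySettingName,NumberOfColumns,Size
```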
 

s3ntro

Member
Apr 25, 2016
Thank you! The storage pool setup seems pretty straightforward. It's the cache drive stuff that remains a question mark for me. It looks like I have to use two SSDs (or a PCIe NVMe drive) rather than one, but it doesn't look like I can stripe across them, because the resiliency settings have to match the HDD array. The documentation on Storage Spaces Direct makes this look a lot more flexible, which is why I asked. It's also unclear whether I should go with the standard cache setup or make it larger, and whether to use the SSDs as a journal drive or not.
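For what it's worth, the approach usually cited for an SSD write-back cache in plain Storage Spaces (not Storage Spaces Direct) is to mark the SSDs as journal disks in the pool, then size the cache when creating the virtual disk. A minimal sketch, assuming a pool named "StoragePool" as in the earlier snippet; the 100GB figure is purely an example, and whether Win10 honors larger sizes is worth verifying against Microsoft's docs:

```powershell
# Hypothetical sketch: add two pool-able SSDs to an existing pool as journal (cache) disks.
$ssds = Get-PhysicalDisk -CanPool $True | Where MediaType -EQ SSD
Add-PhysicalDisk -StoragePoolFriendlyName "StoragePool" -PhysicalDisks $ssds
$ssds | Set-PhysicalDisk -Usage Journal   # dedicate them to write-back caching

# Then size the write-back cache when creating the virtual disk:
New-VirtualDisk -StoragePoolFriendlyName "StoragePool" -FriendlyName BulkStorage `
    -ResiliencySettingName Mirror -NumberOfColumns 2 -UseMaximumSize `
    -WriteCacheSize 100GB
```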