Poor Performance with 24x 8TB Drives in Storage Spaces


Mikhail

New Member
Feb 15, 2017
So, I went to build a BackBlaze Storage Pod 6.0 with a few twists, including 256GB of RAM, two CPUs, 10G networking, and Server 2016.
Open Source Storage Server: 60 Hard Drives 480TB Storage

As I'm getting this system online, I wanted to compare its performance to a similar Adaptec RAID 6 system, also with 24 drives.

I was horrified by the poor performance:

Here is what I get with Storage Spaces + parity
New-VirtualDisk -FriendlyName "datastore" -StoragePoolFriendlyName "datastore1" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Parity


And here is what I get with Storage Spaces without parity
New-VirtualDisk -FriendlyName "datastore" -StoragePoolFriendlyName "datastore1" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Simple



The drives are Seagate 8TB BarraCuda Pro SATA 6Gb/s 256MB Cache 3.5-Inch Internal Hard Drives (ST8000DM005), which do sequential writes at ~125 MB/s.

Previous builds with 24 drives and an Adaptec 8805 + Intel splitter have pushed past 2000 MB/s (when new).
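For reference, a repeatable way to generate this kind of sequential-write load is Microsoft's DiskSpd tool; this is just a sketch, and the target path, file size, and duration are placeholders:

# 60-second, 100% sequential-write test with 64 kB blocks, one thread,
# 8 outstanding I/Os, software and hardware caching disabled (-Sh),
# against a 64 GB test file on the volume under test:
diskspd.exe -c64G -d60 -w100 -t1 -o8 -b64K -Sh D:\test.dat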

How do I make these drives run faster on Windows with some kind of parity/fault tolerance similar to RAID 6? And why is Storage Spaces so darn slow?
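One knob worth trying before giving up on parity spaces: pin the column count and interleave so a full stripe lines up with the NTFS allocation unit, which lets NTFS issue full-stripe writes and skip the parity read-modify-write penalty. A minimal sketch reusing the pool name from above; the column count, interleave, and allocation unit size are assumptions, not tested values:

# Single parity with 5 columns = 4 data columns per stripe.
# 4 data columns x 16 KB interleave = 64 KB per full stripe, matching
# the largest NTFS allocation unit size available on Server 2016.
New-VirtualDisk -FriendlyName "datastore" -StoragePoolFriendlyName "datastore1" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 -NumberOfColumns 5 -Interleave 16KB

# Format so each allocation unit equals one full stripe:
Get-VirtualDisk -FriendlyName "datastore" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536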
 


i386

Well-Known Member
Mar 18, 2016
Germany
(Sadly) Microsoft's implementation of parity spaces sucks. The read speeds are okay, but the write speeds are terrible.

You can bump up the performance with n*3 SSDs for dual parity spaces (n*2 for single parity) acting as a write-back cache, but even then the write speed will drop back to these low numbers once the cache fills up.
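In PowerShell terms, what i386 describes looks roughly like this; the SSD selection and cache size are illustrative assumptions, not tested values:

# Add poolable SSDs so the parity space can journal writes to them:
$ssds = Get-PhysicalDisk -CanPool $true | Where-Object MediaType -eq 'SSD'
Add-PhysicalDisk -StoragePoolFriendlyName "datastore1" -PhysicalDisks $ssds

# Dual parity wants SSDs in multiples of three (n*3), single parity n*2.
# -WriteCacheSize carves the write-back cache out of the SSDs:
New-VirtualDisk -FriendlyName "datastore" -StoragePoolFriendlyName "datastore1" -UseMaximumSize -ProvisioningType Fixed -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 -WriteCacheSize 100GB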
 

sparx

Active Member
Jul 16, 2015
Sweden
Try running ZFS, perhaps? You have more than enough RAM.
Strike that. You said Windows. Oops.
 

Mikhail

New Member
Feb 15, 2017
i386 said:
(Sadly) Microsoft's implementation of parity spaces sucks. The read speeds are okay, but the write speeds are terrible.

You can bump up the performance with n*3 SSDs for dual parity spaces (n*2 for single parity) acting as a write-back cache, but even then the write speed will drop back to these low numbers once the cache fills up.
Yeah, and I'd like to point out that even without parity it's 2x slower than the hardware RAID.
 

maze

Active Member
Apr 27, 2013
Is there a reason you absolutely need Windows? Does the server need to do anything other than be a storage location?
 

ServerSemi

Active Member
Jan 12, 2017
I get 800 MB/s down / 800 MB/s up with my Adaptec 8QZ card using 10 HGST/WD Red drives in RAID 6. Parity spaces was giving me really slow write speeds like yours; that's why I decided on getting a RAID card.
 

psannz

Member
Jun 15, 2016
Deploying Parity on Windows Storage Spaces (WSS) *without* Write Back Cache (WBC) makes you a masochist. At best.
You need to understand that WSS not only disables the disk caches, but also elects not to use system RAM, to ensure data integrity. Writes therefore go straight to disk, without being put in an optimized order the way a HW RAID controller with its own cache would do, and that just kills write IOPS. Once WSS has committed a write to the WBC, however, it is free to optimize the writes to the slow disks, improving performance manyfold.

Fujitsu published this whitepaper on the topic:
https://sp.ts.fujitsu.com/dmsp/Publ...ndows-storage-spaces-r2-performance-ww-en.pdf

TL;DR: Eyeball the linked whitepaper's graphs on pages 30-32, and go buy a pair/triplet of nice write-optimized SSDs. The new Intel Optane drives would fit that bill perfectly, or go for a few ZeusRAM drives.
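After creating the virtual disk, a quick way to verify the cache actually got provisioned is to inspect the virtual disk's properties; treat this as a sketch, since values will vary by setup:

# A parity space created without SSDs in the pool silently ends up with
# a tiny or missing write-back cache; check what you actually got:
Get-VirtualDisk -FriendlyName "datastore" | Select-Object FriendlyName, ResiliencySettingName, WriteCacheSize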
 

Mikhail

New Member
Feb 15, 2017
Nice find!

If you look at the graph
"Spaces Dual Parity vs. Spaces Dual Parity with WBC vs. HW RAID 6"

the "Restore" workload (sequential access, 100% write, 64 kB block size) is what I'm testing.

"Dual Parity - 8 HDDs + 1 GB WBC" get around 250 MB/s
"HW-RAID6 - 8 HDDs" gets around 1250 MB/s

Wow. So the WBC helps, but 250 MB/s from a 24-drive array is still really messed up.

At this point I'm going through the stages of grief. I think I'm past "shock and denial" about how much Storage Spaces sucks, and will probably move on to "anger" soon.
 

i386

Well-Known Member
Mar 18, 2016
Germany
On page 10:

"For performance reasons it is recommended to enable the hard disk cache for every hard disk. All the measurements in this document were carried out with enabled hard disk caches. In order to minimize the risk of data loss during productive operation in the case of a power failure, the use of a UPS (uninterruptible power supply) is strongly recommended. To be able to completely rule out the loss of data in the cache, all hard disk caches would have to be disabled."
This explains why I had even lower write speeds: the caches on the HDDs were disabled in my tests.
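On Server 2016, the per-device cache state can be checked from PowerShell; cmdlet availability varies by Windows version, so treat this as a sketch:

# Shows IsDeviceCacheEnabled and IsPowerProtected for each physical disk:
Get-PhysicalDisk | Get-StorageAdvancedProperty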