Server 2012 R2, Storage Spaces and Tiering


nickveldrin

New Member
Sep 4, 2013
ReFS for me, and it's great; a whole lot faster than it was in 2012 R2.

Sent from my SM-N920T using Tapatalk
Yeah, I've heard really good things about ReFS in 2016, but haven't tried it yet. Unfortunately, I won't be able to with any big storage box, so it'll just be small-scale testing of 2016 features on one C6100 blade.

I have two main storage boxes, one running FreeNAS (BSD's ZFS) and one running Oracle Solaris (Oracle's ZFS), as well as a Synology running embedded Linux, which does LVM with MD underneath.

If post-launch benchmarks show ReFS to be a real contender in Storage Spaces compared to ZFS, I'll think about migrating my VM data over and converting my Solaris box, but the Solaris box is rock solid and works incredibly well, so it's a hard sell at the moment.
 

Chuntzu

Active Member
Jun 30, 2013
Storage Spaces Direct is great; I am setting up TP5 soon to test the improvements. I like sticking with Windows for the whole stack, since Windows networking to other Windows boxes is stupid fast. I have hit 2 million 4K IOPS read and write from a single node to another over SMB 3.0, and initial iSCSI tests show pretty good speeds as well. Mind you, not as fast as SMB 3.0, but still good enough for all my iSCSI boot machines to saturate the underlying storage of the iSCSI target machine (i.e. 4 GB/s reads and 2 GB/s writes on the iSCSI box I was using).

Sent from my SM-N920T using Tapatalk
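
For context on why Windows-to-Windows can be that fast: SMB 3.0 leans on Multichannel and RDMA (SMB Direct), and numbers in that range usually depend on both being active. A minimal PowerShell sketch to verify (standard cmdlets; nothing here is specific to this particular setup):

# On the SMB client: list the multichannel paths SMB negotiated.
# The RSS/RDMA-capable columns show whether SMB Direct is in play.
Get-SmbMultichannelConnection

# Which local NICs SMB considers usable (link speed, RSS, RDMA).
Get-SmbClientNetworkInterface

# Confirm RDMA is enabled on the adapters themselves.
Get-NetAdapterRdma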
 

gigatexal

I'm here to learn
Nov 25, 2012
Yeah, this needs a writeup for sure.
 

Chuntzu

Active Member
Jun 30, 2013
Sure; it's going to be a little while. I have four back-to-back 14-hour days... :-( and then one day off, then four more of the same... damn real life getting in the way.

Sent from my SM-N920T using Tapatalk
 

Mirabis

Member
Mar 18, 2016
I used an NVMe mirror (P3700) in Server 2016 and achieved 1 GB/s speeds (awesome)... and then downgraded to Server 2012 R2 (compatibility)... put the 2x NVMe in a tiered Storage Space with 6x 7200 RPM 2 TB spindles... max 108 MB/s. Such a waste of performance >.< But without the tier I lack space...
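
A hedged sketch of what can sometimes be clawed back on 2012 R2 tiering: the tiered write-back cache defaults to 1 GB, and known-hot files can be pinned to the SSD tier. Pool, tier, and file names below are hypothetical, as are the sizes:

# Assumes a pool "Pool1" already contains the 2x NVMe and 6x HDDs.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Carve a larger write-back cache than the 1 GB default from the SSDs.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 300GB, 5TB `
    -ResiliencySettingName Mirror -WriteCacheSize 8GB

# Pin a hot VHDX to the SSD tier, then run the tier optimizer.
Set-FileStorageTier -FilePath "D:\VMs\hot.vhdx" -DesiredStorageTierFriendlyName "SSDTier"
Optimize-Volume -DriveLetter D -TierOptimize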
 

Mirabis

Member
Mar 18, 2016
113
6
18
30
Chris said:
There have been many performance enhancements with 2016...
Yeah, but I had some trouble with '0 bytes free' on disks in Server 2016 :(. And I can't use Veeam to back up my VMs on Server 2016 (along with other products that don't work on 2016 yet), hehe. Will have to wait for the official 2016 release :/


Sent from my iPhone using Tapatalk
 

amnesia1187

New Member
Jul 8, 2016
I used an NVMe mirror (P3700) in Server 2016 and achieved 1 GB/s speeds (awesome)... and then downgraded to Server 2012 R2 (compatibility)... put the 2x NVMe in a tiered Storage Space with 6x 7200 RPM 2 TB spindles... max 108 MB/s. Such a waste of performance >.< But without the tier I lack space...
Isn't that partly due to the number of columns here? Six disks by themselves would give you three columns, but tiered storage will use the lowest tier's column count, i.e. 1, so no striping despite six disks. I'm going to be messing with this soon when my disks arrive; I was going to try partitioning the disks or mounting VHDs to try and trick it :p. I'll gladly sacrifice some capacity for perf.
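
That is exactly the 2012 R2 behavior: a tiered virtual disk gets one column count for both tiers, bounded by the smaller tier. A quick sketch for checking what a vdisk actually got, and what forcing columns looks like ($ssd/$hdd are tier objects from New-StorageTier; pool and vdisk names are hypothetical):

# See the column count each virtual disk ended up with.
Get-VirtualDisk | Select-Object FriendlyName, NumberOfColumns, ResiliencySettingName

# Forcing 3 columns on a two-way mirror needs 6 disks per tier on
# 2012 R2, so this fails when the SSD tier only has 2 disks:
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 4TB `
    -ResiliencySettingName Mirror -NumberOfColumns 3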
 

Mirabis

Member
Mar 18, 2016
Yeah, 2016 allows mixing the column count for the HDD/SSD tiers. In 2012 R2 I'm limited to the column count of my NVMe disks :(

Sent from my iPhone using Tapatalk
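
For anyone landing here later, a sketch of the 2016 behavior being described: column count and resiliency can be set per tier when the tiers are defined, so the HDD tier can stripe wider than the SSD tier. Pool name, tier names, and sizes below are hypothetical:

# Server 2016: per-tier resiliency and column count.
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" `
    -MediaType SSD -ResiliencySettingName Mirror -NumberOfColumns 1
New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" `
    -MediaType HDD -ResiliencySettingName Mirror -NumberOfColumns 3

New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVol" -FileSystem NTFS `
    -StorageTierFriendlyNames "SSDTier", "HDDTier" -StorageTierSizes 200GB, 4TB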
 

amnesia1187

New Member
Jul 8, 2016
Ooooh, was that just with Storage Spaces Direct, or is there actually more functionality in normal spaces too in 2016? I haven't had a chance to play with it yet.
 

Gangz777

New Member
Dec 5, 2017
The first two benchmarks are just setting some baselines: test the speeds of a single SSD connected to the M1015 and a single HDD on the expander. Knowing what each drive can do by itself should help set expectations for the rest of the tests.

Here's the Samsung 840 Pro. Pretty much looks like every other benchmark of these drives. Good and fast.
[benchmark screenshot no longer available]
And here's a single Hitachi 2TB 7200. Again, it looks just about like it should; no ill effects from the borked expander when looking at one drive at a time:
[benchmark screenshot no longer available]
Hi, we built an array of 3 SSDs in parity with Storage Spaces and got quite strange results.
I wanted to compare with yours, but unfortunately the pictures are not displayed.

I understand that the post is very old, but do you still have the test results?
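
Since the original screenshots are gone, one way to regenerate comparable single-drive baselines is Microsoft's DiskSpd; the flags below are standard, and the target path and file size are just examples:

# Sequential reads, 1 MiB blocks, QD32, 60 s, caching disabled (a rough
# analog of the CrystalDiskMark sequential test).
.\diskspd.exe -b1M -d60 -o32 -t1 -w0 -Sh -L -c16G T:\baseline.dat

# Random 4 KiB reads at QD32 against the same file.
.\diskspd.exe -b4K -d60 -o32 -t1 -r -w0 -Sh -L T:\baseline.dat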
 

PigLover

Moderator
Jan 26, 2011
3,190
1,549
113
No, I don't believe I'd be able to find that. Sorry.

Sent from my VS996 using Tapatalk
 

Gangz777

New Member
Dec 5, 2017
2
0
1
33
What were your results? Here are ours:
-----------------------------------------------------------------------
CrystalDiskMark 5.2.2 x64 (C) 2007-2017 hiyohiyo
Crystal Dew World
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 763.950 MB/s
Sequential Write (Q= 32,T= 1) : 180.855 MB/s
Random Read 4KiB (Q= 32,T= 1) : 131.846 MB/s [ 32189.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 30.803 MB/s [ 7520.3 IOPS]
Sequential Read (T= 1) : 562.757 MB/s
Sequential Write (T= 1) : 7.340 MB/s
Random Read 4KiB (Q= 1,T= 1) : 19.698 MB/s [ 4809.1 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 17.858 MB/s [ 4359.9 IOPS]

Test : 16384 MiB [F: 0.0% (0.2/1781.9 GiB)] (x5) [Interval=5 sec]
Date : 2017/12/04 19:54:35
OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)

That's 3 SSDs in parity with default Storage Spaces settings and the volume formatted with a 64K allocation unit size.

Some results are better than on a HW controller, but without a write-back cache the Sequential Write (T=1) is very bad.
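
Parity spaces without a write-back cache are notorious for exactly this kind of QD1 sequential-write collapse. Two tweaks that sometimes help, sketched with hypothetical pool and vdisk names: give the vdisk an explicit interleave so a full stripe (2 data columns x 32K) matches the 64K allocation unit, and mark the pool power-protected only if the SSDs actually have power-loss protection:

# 3-disk parity = 2 data columns + 1 parity; 32K interleave x 2 data
# columns = one 64K full stripe per NTFS allocation unit.
New-VirtualDisk -StoragePoolFriendlyName "SSDPool" -FriendlyName "ParityVD" `
    -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 32KB `
    -UseMaximumSize

Get-VirtualDisk -FriendlyName "ParityVD" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536

# Only if the SSDs have power-loss protection: treat drive caches as
# safe, which lets Spaces skip some write-through flushes.
Set-StoragePool -FriendlyName "SSDPool" -IsPowerProtected $true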