Server 2012 R2: Storage Spaces


awedio

Active Member
Feb 24, 2012
To all the STH'ers (dba, PigLover etc) that have tested / played with SS...

As of today, how do you feel about SS?

- Is it ready for prime time?
- Is it worth "investing" in?
- Should one be looking at other alternatives like gea's Napp-IT?

Emotional & technical responses are welcome ;)
 

PigLover

Moderator
Jan 26, 2011
My opinion only:

- Yes, it is ready for prime time
- Yes, if your use case fits it is worth "investing" in (though I am not totally sure what "investing" in means to you here)
- Yes, you want to look at ZFS and other alternatives: there is no universally ideal storage solution out there.

A bit more...

Storage spaces is stable and functional. It does what it claims very effectively. But you need to do some homework to understand what those claims are, whether they fit your application and how to manage it before you jump in. If you don't do this you will probably stumble and be disappointed with the outcomes. Of course - that statement is true of just about any storage management solution.

So what does it do well?
- It integrates with the Windows network file sharing (SMB) model better than any other current option.
- Because of this you get the advantages of SMB Multichannel and SMB Direct, for significant file-sharing bandwidth improvements in Windows-to-Windows environments.
- It integrates well with Hyper-V.
- Simple (RAID 0-like) and Mirror (RAID 1-like) spaces perform very well.
- Parity (RAID 5/6-like) spaces perform adequately when SSD-based journal disks are included (see the sketch after this list).
- It integrates well with the Windows Server iSCSI target for serving file-based (.vhdx) volumes.
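For anyone who has not set one up before, the basic pool/space creation is only a few lines of PowerShell. A minimal sketch, assuming a single local storage subsystem and made-up pool/space names:

# Pool every eligible (blank, un-partitioned) disk
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Mirror (RAID 1-like) space using the remaining pool capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "MirrorVD" `
    -ResiliencySettingName Mirror -UseMaximumSize

# Parity (RAID 5-like) space; dedicating SSDs as journal disks
# (Set-PhysicalDisk -Usage Journal) helps parity write performance
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityVD" `
    -ResiliencySettingName Parity -Size 10TB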
 

awedio

Active Member
Feb 24, 2012
- Yes, if your use case fits it is worth "investing" in (though I am not totally sure what "investing" in means to you here)
"Investing" as in learning/diving into the technology & spending $$ on the necessary hardware
 

awedio

Active Member
Feb 24, 2012
I'd say that if you avoid parity layouts, you'll be very happy. I don't have enough experience with the clustering to have an opinion.
dba, you running SS on your production boxes?
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
dba, you running SS on your production boxes?
I do. I use it primarily for SMB3 serving of files and VM disks with 10GbE and 32Gb/s IPoIB connections - high throughput and IOPS, but not that many simultaneous clients. The killer features - for me at least - are SMB multichannel and SMB-Direct. Without those, I probably would have stuck with ZFS. Because of those two features, however, there isn't anything else around that comes close if you need speed and low latency.
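If you want to confirm that Multichannel and SMB Direct are actually in play for a given connection, the built-in SMB cmdlets will show you (run from the client; nothing here is specific to my setup):

# One row per NIC path when multichannel is active, with RSS/RDMA flags
Get-SmbMultichannelConnection

# NICs the SMB client considers usable for multichannel
Get-SmbClientNetworkInterface

# SMB Direct needs RDMA-capable adapters with RDMA enabled
Get-NetAdapterRdma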
 

cesmith9999

Well-Known Member
Mar 26, 2013
Clustering works; getting the volumes mounted and formatted takes a little bit of work.

The biggest thing to worry about is still picking the proper disks for the task. I am seeing too many people use SATA disks where they should be using SAS.

I have 40 servers using Storage Spaces and we have a 300 TB daily ingestion rate. SATA is not keeping up.
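As a quick sanity check on the disk question, the Storage cmdlets will tell you exactly what bus and media type each disk presents as (just a reference query, no vendor tools assumed):

# What Storage Spaces sees: interface (SAS/SATA), media type, pool eligibility
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, MediaType, CanPool, Usage, Size |
    Sort-Object BusType | Format-Table -AutoSize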
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
That's great. Tell us more! That's over 3,600MB/s average, which probably means that the peaks are quite a bit higher. Please do share some details about the number of servers, disks, controllers, network, etc. Are you throughput or IOPS limited?
 

PigLover

Moderator
Jan 26, 2011
...for me at least - are SMB multichannel and SMB-Direct. Without those, I probably would have stuck with ZFS. Because of those two features, however, there isn't anything else around that comes close if you need speed and low latency.
This. Pretty much the complete story on why SS.

True with high-end transport (IB or even 10GbE), and true at the low end if you just want the simplest way to maximize Nx 1GbE NICs. I'm getting consistent 400MB/s transfers off an Avoton-based server with no hassle or complication to set up: 4x 1GbE built-in NICs hitting link saturation with ease.

If somebody built ZFS on Windows, or if the Samba group got SMB3 with Multichannel and SMB Direct working, I'd go back to ZFS in a flash. But until they do, it's Storage Spaces.
 

Andyreas

Member
Jul 26, 2013
Sweden
I just have to ask: why must you use SS to make use of SMB Multichannel and SMB Direct? Why couldn't you just use the built-in RAID card and then make use of SMB Multichannel + Direct? A lot of people in here seem to be running servers with built-in RAID cards. These numbers are my own CrystalDiskMark benchmarks comparing RAID 10 to an SS mirror. RAID 10 was a bit faster in sequential speeds, but apart from that they perform quite similarly.

CrystalDiskMark on RAID 10
Sequential Read : 784.862 MB/s
Sequential Write : 418.037 MB/s
Random Read 512KB : 688.133 MB/s
Random Write 512KB : 565.760 MB/s
Random Read 4KB (QD=1) : 22.569 MB/s [ 5509.9 IOPS]
Random Write 4KB (QD=1) : 42.613 MB/s [ 10403.5 IOPS]
Random Read 4KB (QD=32) : 170.178 MB/s [ 41547.4 IOPS]
Random Write 4KB (QD=32) : 51.680 MB/s [ 12617.1 IOPS]
Test : 1000 MB [D: 1.3% (23.7/1788.4 GB)] (x5)

CrystalDiskMark on Storage Spaces Mirror
Sequential Read : 488.315 MB/s
Sequential Write : 428.223 MB/s
Random Read 512KB : 441.561 MB/s
Random Write 512KB : 479.638 MB/s
Random Read 4KB (QD=1) : 19.592 MB/s [ 4783.3 IOPS]
Random Write 4KB (QD=1) : 32.915 MB/s [ 8035.9 IOPS]
Random Read 4KB (QD=32) : 187.476 MB/s [ 45770.5 IOPS]
Random Write 4KB (QD=32) : 41.221 MB/s [ 10063.8 IOPS]
Test : 1000 MB [D: 0.0% (0.2/1785.9 GB)] (x5)
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
You could just use a RAID card on Windows, but with Storage Spaces you get pooling and thin provisioning, very easy SSD tiering and caching, clustering, scale-out, and of course the ability to create a volume that spans more than one disk controller.

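Roughly, the tiering/pooling side looks like this in 2012 R2 PowerShell. A sketch only; the pool, tier, and size values here are invented for illustration:

# Define an SSD tier and an HDD tier inside an existing pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Mirrored, tiered space: frequently-used data is moved to the SSD tier,
# plus an SSD write-back cache to absorb bursts of random writes
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredVD" `
    -ResiliencySettingName Mirror `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 200GB, 4TB `
    -WriteCacheSize 8GB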
 

cesmith9999

Well-Known Member
Mar 26, 2013
I just inherited this configuration a few weeks ago.

We inject 300 TB of data a day, we also delete 300 TB of data a day. This is a very complicated file server farm.

Each of these servers (Dell R720) has a single LSI 9205 connected to a RAIDINC JBOD with 60 * 3 TB disks, connected with a single 10Gb link. Each server is standalone; there is no clustering in this set of servers. This part of the configuration has 3.2 PB of exposed storage (each server has 20 * 4 TB VDisks).

Some of the servers are running 2012, some are running 2012 R2.

Some servers have one 60-disk storage pool, some have five 12-disk pools. Don't ask why; it is a very heated discussion.

I have helped a co-worker configure his production Storage Spaces cluster (3 nodes, 6 enclosures * 24 * 900 GB disks). He has not complained to me about performance on his cluster. With his configuration we are using the -IsEnclosureAware switch with New-VirtualDisk.
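For anyone who has not used it, enclosure awareness is just a flag on the space; something along these lines, with placeholder names rather than the actual cluster config:

# Mirrored space whose copies are forced onto different enclosures, so a
# whole JBOD (or its HBA/cable) can fail without taking the space offline
New-VirtualDisk -StoragePoolFriendlyName "ClusterPool" -FriendlyName "CSV01" `
    -ResiliencySettingName Mirror -Size 10TB `
    -IsEnclosureAware $true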
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
If the disks can't keep up, then I'm guessing it's an IOPS limitation? If so then you are right: SAS might be good for 2x IOPS. Does write caching to SSD help your workload any?

 

cesmith9999

Well-Known Member
Mar 26, 2013
There are no SSDs in the mix. That is one of the items I am bringing up as we look at how we want to reconfigure these servers. These servers were deployed on 2012 before R2 was available.

Yes, we are very IOPS bound.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I assume that you have investigated tuning the number of columns and the interleave. If so, then I wonder whether adding SSD write caching would smooth out the I/O enough to help; it would also allow Windows to do some write coalescing, if SS has that ability. Add some more disks to increase the available IOPS, and maybe you don't need to toss out all of those SATA disks after all, saving some $.

Enclosure awareness is a very smart feature. Oracle has something similar in ASM, even more flexible actually, and I use it extensively. For others: If you have multiple disk enclosures, SS can optionally use enclosure awareness to ensure that a given write to disk gets sent to both enclosures, protecting against an HBA failure, a cable unplug, or even an enclosure failure.

Sounds like a nice project, by the way. Enjoy!
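For reference, columns, interleave, and the SSD write-back cache are all just parameters on the virtual disk. Illustrative numbers only, not a recommendation for this particular farm:

# 8 columns, 256 KB interleave (the amount written to each column before
# moving to the next), and a 10 GB SSD write-back cache per space.
# Note: -WriteCacheSize needs SSDs (or journal disks) present in the pool.
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "IngestVD" `
    -ResiliencySettingName Parity -Size 40TB `
    -NumberOfColumns 8 -Interleave 256KB `
    -WriteCacheSize 10GB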
 

cesmith9999

Well-Known Member
Mar 26, 2013
There has been some investigation. Unfortunately it started from a bad premise and reached a bad conclusion, which is something I am working through with them politically.

There are no plans to throw away the JBODs. If I had my way I would buy a second JBOD (40 * 3 TB disks and 20 * 480 GB SAS SSDs) and new HBAs and do tiered spaces.

With PigLover's wonderful walkthrough of how to set up tiering with SS, I converted my home server to a dual-redundant parity Storage Spaces configuration, and I could not be happier with it at home.
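In case it helps anyone copying that setup, dual parity is just the redundancy setting on the space. A hedged sketch with made-up names (dual parity needs at least seven disks and Server 2012 R2):

# Dual parity: survives two simultaneous disk failures (RAID 6-like)
New-VirtualDisk -StoragePoolFriendlyName "HomePool" -FriendlyName "ArchiveVD" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -ProvisioningType Fixed -UseMaximumSize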