High performance DAS - NVMe or SSD


Stril

Member
Sep 26, 2017
Hi!

I need to build a Windows Server 2016 box with VERY fast DAS storage. I need at least 8 TB of storage.

Option 1:
4x Intel DC P4600 4 TB
RAID 10 - Windows software RAID?

Option 2:
32x Intel DC S4600 480 GB
RAID 10 on an Avago 9361-8i?
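
(Quick sanity check on usable capacity for the two options - just a rough sketch that assumes RAID 10 halves raw capacity and uses the vendors' decimal TB, so formatted capacity will come out a bit lower. Option 2 actually lands slightly under the 8 TB mark.)

Code:
# rough usable capacity in decimal TB; RAID 10 keeps half of the raw space
$option1TB = (4 * 4.0) / 2      # 4x 4 TB Intel DC P4600    -> 8.0 TB
$option2TB = (32 * 0.48) / 2    # 32x 480 GB Intel DC S4600 -> 7.68 TB
"Option 1: $option1TB TB usable"
"Option 2: $option2TB TB usable"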


What would you prefer?

I have never used a RAID setup with NVMe cards. Is this possible? I will lose hot-swap capability, but I should get better performance, or am I wrong?

Thank you for your help!

Stril
 

i386

Well-Known Member
Mar 18, 2016
Germany
Define "fast", low latencies or high throuhput? :D

I would (try to) go with fewer devices, so option 1, but without more information about the use case it's just a preference.

Windows software RAID?
If you want to use mirrors ("RAID 10") you could look at Storage Spaces, Microsoft's version of software-defined storage.
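
A minimal sketch of what that looks like with the in-box cmdlets - the pool/volume names and the 7TB size are just placeholders, and the Get-PhysicalDisk filter depends on your hardware:

Code:
# pool the poolable NVMe devices and carve a mirrored (2-copy) ReFS volume out of them
$disks = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq 'NVMe'
New-StoragePool -FriendlyName 'FastPool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks
New-Volume -StoragePoolFriendlyName 'FastPool' -FriendlyName 'FastVol' -FileSystem ReFS -ResiliencySettingName Mirror -Size 7TB

A mirror space across more than two drives stripes across columns automatically, which is what gives the RAID-10-like layout.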

I have never used a RAID setup with NVMe cards. Is this possible? I will lose hot-swap capability, but I should get better performance, or am I wrong?
It's possible to use RAID with NVMe devices; most solutions use software RAID, but there are also a few hardware options. As with HDDs, performance depends on which RAID type you use (mirror, striping or parity), how many devices are in a logical device, and so on.

Hot swapping is possible if your chassis/backplane, mainboard and OS/RAID software support it.
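
To check whether the NVMe devices even show up as poolable (output obviously varies by platform), something like:

Code:
# NVMe devices report BusType 'NVMe'; CanPool tells you if Storage Spaces can claim them
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, CanPool, HealthStatus | Sort-Object BusType | Format-Table -AutoSize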
 

Stril

Member
Sep 26, 2017
Hi!

I need low latency and high IOPS.

@StorageSpaces:
I would use Storage Spaces as RAID 10. I do not like parity...

Have you ever tested the performance of systems like these?
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
I've done some benchmarking with Windows 2012 software RAID, mirrors and stripes. The performance is not really great; you are in fact better off using some kind of hardware RAID, or a SAN. It may be better in 2016, but I would seriously consider a hardware RAID alternative.

Another thing: Intel has that Optane cache software. I'm not sure if it works on server-grade hardware yet, but I'm pretty sure that with it you get high and stable IOPS and low latency.
 

i386

Well-Known Member
Mar 18, 2016
Germany
I've done some benchmarking with Windows 2012 software RAID, mirrors and stripes.
Software RAID or Storage Spaces?
With Storage Spaces + ReFS, Microsoft has had two major updates since Server 2012 (ReFS 1.x -> 2.x -> 3.x), changing a lot of stuff.
 

Stril

Member
Sep 26, 2017
Hi!

That's why I am asking. I do not trust the new hardware RAID controllers for NVMe. So my option would be Storage Spaces with ReFS.

Did anybody here ever use this combination in a high-performance environment like mine?
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
Software RAID or Storage Spaces?
With Storage Spaces + ReFS, Microsoft has had two major updates since Server 2012 (ReFS 1.x -> 2.x -> 3.x), changing a lot of stuff.
I have not tested Storage Spaces, but if it is now at Linux/ZFS performance levels, then that is good news. A hardware RAID card is still faster though, at least for burst IO.
 

wvaske

New Member
Apr 12, 2017
I do performance testing on all-flash SDS solutions on Microsoft. Mirrors in storage spaces with flash scale to higher performance than hardware raid options using the same devices (SAS or SATA -- I haven't tested the NVMe hardware RAID but I would expect them to limit performance over what Storage Spaces will offer).

I'm currently testing a Storage Spaces Direct configuration (distributed storage). 32x SATA drives across 4 nodes with 3-way mirroring hits 1.5 million 4k Random Read IOPS. 10GB/s large block reads.

I've done testing with NVDIMMs and NVMe drives as well and Storage Spaces gives phenomenal performance.
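
For anyone who wants to poke at it, the S2D side is mostly two cmdlets once the failover cluster itself is up (the volume name and size below are placeholders, not my exact setup):

Code:
# run on the cluster: claims the eligible local drives and builds the pool/cache automatically
Enable-ClusterStorageSpacesDirect
# with four nodes the default resiliency for a new volume is a three-way mirror
New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Mirror01' -FileSystem CSVFS_ReFS -Size 2TB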
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
I do performance testing on all-flash SDS solutions on Microsoft. Mirrors in storage spaces with flash scale to higher performance than hardware raid options using the same devices (SAS or SATA -- I haven't tested the NVMe hardware RAID but I would expect them to limit performance over what Storage Spaces will offer).

I'm currently testing a Storage Spaces Direct configuration (distributed storage). 32x SATA drives across 4 nodes with 3-way mirroring hits 1.5 million 4k Random Read IOPS. 10GB/s large block reads.

I've done testing with NVDIMMs and NVMe drives as well and Storage Spaces gives phenomenal performance.
Do companies often deploy 32+ unit SATA SSD storage spaces setups or is this rare?
 

wvaske

New Member
Apr 12, 2017
Do companies often deploy 32+ unit SATA SSD storage spaces setups or is this rare?
32 drives (8 per node in a 4-node configuration) is a reasonably common configuration. Most customers I talk to are deploying hybrid configurations with 2-4 SSD cache drives (NVMe, SAS, or SATA) and 8-12 7.2k HDDs for capacity (per node).
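
If you want to see how S2D has split a node's drives between cache and capacity, a quick check (just a sketch; cache devices show up with Usage 'Journal'):

Code:
# cache drives are bound as 'Journal'; capacity drives stay 'Auto-Select'
Get-PhysicalDisk | Group-Object Usage, MediaType | Select-Object Count, Name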
 

azev

Well-Known Member
Jan 18, 2013
@wvaske do you mind sharing your configuration to get that kind of performance with storage spaces?
 

wvaske

New Member
Apr 12, 2017
@wvaske do you mind sharing your configuration to get that kind of performance with storage spaces?
Just to be clear, this is Storage Spaces Direct -- the distributed storage solution that Microsoft is doing these days. It's a 4-node configuration (2-socket Intel Xeon Gold 6142 or 6148 with 384GB of memory).
Each node is using 8x read-intensive Enterprise SATA SSDs.
Networking is 100Gb Mellanox for RDMA support.

To get the performance mentioned, I need a large number of VMs, each running the workload (the number of VMs equal to the number of hyper-threaded cores; that's 64 VMs per host with the 6142s).
I can hit 1M random read IOPS with 20 VMs per host and medium queue depth workloads (QD = 4-8) on each VM. (Each VM uses a single thread to run the workload. In general, I see equivalent performance with fewer big VMs that use many threads vs. more small VMs with few/single threads.)
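
For reference, a roughly comparable per-VM workload with DiskSpd (one thread, QD 8, 4K random reads; the file path and size are placeholders) looks something like this:

Code:
# 4K random reads, 1 thread, 8 outstanding IOs, 60 s run, caching disabled, latency stats
.\diskspd.exe -b4K -t1 -o8 -r -w0 -d60 -Sh -L -c20G C:\ClusterStorage\Volume1\test.dat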

A full write up will show up here in the next month or so: Storage
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Instead of traditional DAS, consider 100GbE NVMe over Fabrics. You can start with it direct-attached, then move it onto the network and use it with other hosts later.
 

Evan

Well-Known Member
Jan 6, 2016
Do companies often deploy 32+ unit SATA SSD storage spaces setups or is this rare?
I won’t say that SATA is that common, but SAS, yes, absolutely, and I can’t see why you couldn’t use SATA. I would certainly consider it if I was building certain systems where power or simplicity was most important; not needing a SAS adapter can really work for some people.
 

wvaske

New Member
Apr 12, 2017
Here's some data. I have a Supermicro server with 2x Intel Xeon Gold 6142 (16c @ 2.6GHz) and 384GB of memory. I have a 24-port LSI HBA with 8x Read Intensive SATA drives and 8x Write Intensive SATA drives (no expander). I also have 4x Read Intensive NVMe drives. (All drives are enterprise SSDs with full power protection from a major manufacturer.)

I have the SATA drives configured with Storage Spaces with 3-way mirroring and the NVMe drives configured with 2-way mirroring (not enough drives for 3-way). The virtual disks are thick provisioned and the drives were all secure erased right before volume creation. All volumes use ReFS with default allocation size.
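
For anyone wanting to reproduce the layout, something along these lines gets you the same SATA configuration (pool and disk names here are placeholders, not the exact commands I ran):

Code:
# 3-way mirror, thick (fixed) provisioning, ReFS with the default allocation unit size
New-VirtualDisk -StoragePoolFriendlyName 'SataPool' -FriendlyName 'Mirror3Way' -ResiliencySettingName Mirror -NumberOfDataCopies 3 -ProvisioningType Fixed -UseMaximumSize
Get-VirtualDisk -FriendlyName 'Mirror3Way' | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS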

8x Write Intensive SATA Drives:
(benchmark screenshot)

8x Read Intensive SATA Drives:
(benchmark screenshot)

4x Read Intensive NVMe Drives:
(benchmark screenshot)


I also did one final test with all 16x SATA drives in a single pool. Storage Spaces will only use a capacity on each drive equal to the capacity of the smallest drive, so the 4TB RI drives will look like 2TB drives. This is to see the impact of 'spindle count' on the performance numbers.
16x SATA Drives:
(benchmark screenshot)

So large block scales with drives; small block really doesn't unless you have enough threads to do something with it.