Storage Spaces Direct platform


KamiCrazy

New Member
Apr 13, 2013
I'm looking to build a new hyper-converged Storage Spaces Direct cluster and would like a second opinion on my hardware choices.

This is what I am thinking of at the moment.

Base platform: Supermicro 1028R-WTNRT - it has 2x NVMe slots and 8 hot-swap drive bays for SSDs
E5-2620 v4 CPUs - they seem to be the best value
128GB of RAM (4x 32GB RDIMMs)
2x Intel 750 400GB NVMe drives
4x Samsung 850 EVO 2TB SSDs
Adaptec 1000-8i8e HBA
Mellanox ConnectX-3 InfiniBand adapters

This seems to cover all the minimum S2D requirements for an all-flash setup, and it leaves plenty of room for future upgrades: there's potential for 512GB of RAM and 16TB of raw SSD capacity per node if I scale up.
With the Adaptec HBA there is also the option to keep adding disk space through an external JBOD.
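
For what it's worth, those scale-up figures sanity-check with some quick arithmetic. A rough sketch follows; the 16 DIMM slot count, the four-node cluster size and the 3-way mirror resiliency are my assumptions, not confirmed specs:

```python
# Rough sanity check of the per-node scale-up figures quoted above.
# Assumptions (mine, not confirmed specs): 16 DIMM slots in the 1028R-WTNRT,
# a 4-node cluster, and 3-way mirror resiliency for the usable estimate.
# The two NVMe drives are taken to be the cache tier, so they add no capacity.

DIMM_SLOTS = 16
DIMM_SIZE_GB = 32
SSD_BAYS = 8
SSD_SIZE_TB = 2.0        # 850 EVO 2TB in every hot-swap bay
MIRROR_COPIES = 3        # common S2D resiliency for an all-flash cluster

max_ram_gb = DIMM_SLOTS * DIMM_SIZE_GB        # 512 GB per node
raw_ssd_tb = SSD_BAYS * SSD_SIZE_TB           # 16 TB raw per node

nodes = 4
cluster_raw_tb = nodes * raw_ssd_tb                  # 64 TB raw across 4 nodes
cluster_usable_tb = cluster_raw_tb / MIRROR_COPIES   # ~21 TB before reserves

print(f"Max RAM per node: {max_ram_gb} GB")
print(f"Raw SSD per node: {raw_ssd_tb:.0f} TB, cluster raw: {cluster_raw_tb:.0f} TB")
print(f"Approx. usable at 3-way mirror: {cluster_usable_tb:.1f} TB")
```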
 

Deslok

Well-Known Member
Jul 15, 2015
deslok.dyndns.org
I would probably look at PM863 drives instead of the 850 EVO from an enterprise perspective. Are you running the InfiniBand at QDR?
 

Evan

Well-Known Member
Jan 6, 2016
Deslok said:
I would probably look at PM863 drives instead of the 850 EVO from an enterprise perspective. Are you running the InfiniBand at QDR?

And they come in 3.84TB versions if you wanted to make better use of your available slots.
 

felmyst

New Member
Mar 16, 2016
Pricing. Consider pricing. Your setup will be almost $9K per node, plus S2D requires WS2016 Datacenter edition, so add another ~$4.5K for every node in the S2D setup.
A traditional shared-JBOD Storage Spaces setup will be much cheaper.
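
Putting rough numbers on that, a quick sketch using the ~$9K hardware and ~$4.5K Datacenter list-price figures quoted in this thread (both are estimates, not vendor quotes):

```python
# Per-node cost sketch using the estimates quoted in this thread:
# ~$9K of hardware per node plus ~$4.5K per node for WS2016 Datacenter
# at list price. Neither figure is a vendor quote.

hardware_per_node = 9_000
datacenter_per_node = 4_500

total_per_node = hardware_per_node + datacenter_per_node
licence_share = datacenter_per_node / total_per_node

print(f"Total per node: ${total_per_node:,}")          # $13,500
print(f"Licensing share of that: {licence_share:.0%}") # ~33%
```
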
Deslok said:
I would probably look at PM863 drives instead of the 850 EVO from an enterprise perspective. Are you running the InfiniBand at QDR?

Using the 850 EVO is OK since it's the capacity tier. They're also quite fast, so read IO from the capacity tier should be no problem.
 

KamiCrazy

New Member
Apr 13, 2013
Deslok said:
I would probably look at PM863 drives instead of the 850 EVO from an enterprise perspective. Are you running the InfiniBand at QDR?

PM863 drives push the project out of budget, unfortunately. Using the 850 EVO is a bit of a gamble, but one I am willing to take. Sometimes taking such risks has paid off, like when I bought HP 3PAR Optimus SSDs for SOFS. I will indeed be using QDR InfiniBand. I've got my eye on those 4036E switches.

felmyst said:
Pricing. Consider pricing. Your setup will be almost $9K per node, plus S2D requires WS2016 Datacenter edition, so add another ~$4.5K for every node in the S2D setup.
A traditional shared-JBOD Storage Spaces setup will be much cheaper.

I haven't priced everything out yet, but dual processors, 128GB of RAM, 2 NVMe drives, 4 SSDs, an InfiniBand card and a SAS HBA come out to ~$7.5K for hardware. This isn't counting switches, cables etc., which of course push up the price.

We're already running a JBOD Storage Spaces setup. It works, but it's not cost effective. The JBODs themselves are atrociously expensive for a hunk of metal, and the drives are insanely priced, especially the SSDs. If anyone asked me whether it made any sense to go the traditional JBOD route, I would flatly say no, unless of course Datacenter edition was a problem.
 

Deslok

Well-Known Member
Jul 15, 2015
deslok.dyndns.org
KamiCrazy said:
PM863 drives push the project out of budget, unfortunately. Using the 850 EVO is a bit of a gamble, but one I am willing to take. Sometimes taking such risks has paid off, like when I bought HP 3PAR Optimus SSDs for SOFS. I will indeed be using QDR InfiniBand. I've got my eye on those 4036E switches.

I haven't priced everything out yet, but dual processors, 128GB of RAM, 2 NVMe drives, 4 SSDs, an InfiniBand card and a SAS HBA come out to ~$7.5K for hardware. This isn't counting switches, cables etc., which of course push up the price.

We're already running a JBOD Storage Spaces setup. It works, but it's not cost effective. The JBODs themselves are atrociously expensive for a hunk of metal, and the drives are insanely priced, especially the SSDs. If anyone asked me whether it made any sense to go the traditional JBOD route, I would flatly say no, unless of course Datacenter edition was a problem.
Where are you getting your 850s then? Because I'm shopping in the wrong place; the 863 looks attractive on eBay at ~$600-700 for a 2TB ...
 

DBordello

Member
Jan 11, 2013
felmyst said:
Pricing. Consider pricing. Your setup will be almost $9K per node, plus S2D requires WS2016 Datacenter edition, so add another ~$4.5K for every node in the S2D setup.
A traditional shared-JBOD Storage Spaces setup will be much cheaper.

Using the 850 EVO is OK since it's the capacity tier. They're also quite fast, so read IO from the capacity tier should be no problem.

I have been keeping an eye on Storage Spaces Direct. It seems like a low-cost way to achieve shared storage for a smaller operation.

Are you sure it requires a Datacenter license? That would be a deal killer.
 

KamiCrazy

New Member
Apr 13, 2013
Deslok said:
Where are you getting your 850s then? Because I'm shopping in the wrong place; the 863 looks attractive on eBay at ~$600-700 for a 2TB ...

I should be asking you where you get your PM863s. $600 for 2TB? That's crazy cheap if it is new.
 

DBordello

Member
Jan 11, 2013
Wow, that is brutal. It seems like a great solution for the little guy. One of the advantages is avoiding the NAS cost. However, if the license is $5K, ouch.
 

DavidRa

Infrastructure Architect
Aug 3, 2015
Central Coast of NSW
www.pdconsec.net
DBordello said:
Wow, that is brutal. It seems like a great solution for the little guy. One of the advantages is avoiding the NAS cost. However, if the license is $5K, ouch.

S2D needs four nodes, so you're up for $24K minimum. It also kills S2D as the storage for a larger environment - why would you build a (say) 6-node S2D cluster on commodity hardware and then spend an extra $36K licensing it to be storage? You're at, or above, similar SAN storage prices now.
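
In licence-only terms that scaling looks roughly like this. A sketch assuming the ~$6K per-node Datacenter list price implied by the $24K / four-node figure; negotiated pricing will be lower:

```python
# Datacenter licensing cost alone as the cluster grows, assuming the
# ~$6K per-node list price implied by the $24K / four-node figure above.
# Negotiated pricing will be lower; hardware is not included here.

datacenter_per_node = 24_000 / 4   # ~$6K per node at list

for nodes in (4, 6, 8):
    licensing = nodes * datacenter_per_node
    print(f"{nodes} nodes: ~${licensing:,.0f} in Datacenter licensing")
```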
 

DBordello

Member
Jan 11, 2013
The licensing does make it hard to think of a good use case for S2D. It seems so braindead that it almost makes me hopeful Microsoft will fix the license terms.
 

Evan

Well-Known Member
Jan 6, 2016
That's all RRP pricing. What you can negotiate is always way way less. I know for vSAN my employer doesn't pay anywhere near RRP.
Same with mine, and it's a logical choice for ESX servers, but I would have liked to see a cheaper way to get the storage licensing, as it would help adoption.
 

felmyst

New Member
Mar 16, 2016
DavidRa said:
S2D needs four nodes, so you're up for $24K minimum. It also kills S2D as the storage for a larger environment - why would you build a (say) 6-node S2D cluster on commodity hardware and then spend an extra $36K licensing it to be storage? You're at, or above, similar SAN storage prices now.

S2D requires a minimum of two nodes. That was announced more than two months ago, and it's finally available in TP5, released today.
 

felmyst

New Member
Mar 16, 2016
It's not fully dead. If you're running enough VMs on it, Datacenter licenses start to make a lot of sense (someone here did the calculation; I'd have to look it up). It just won't help the little guy like it could have, unless they change it pre-release.
Nope, it's dead. S2D was interesting for small businesses only: it doesn't scale well in big datacenter environments. There's a 16-node limit if I remember correctly (12 in TP4), no data locality, high network dependence, and deduplication is still not officially supported for anything except VDI and backups (this may change later, though). It's just like VSAN, but even worse.
The only case I see for S2D in the datacenter is the "disaggregated" scenario:

It's like SOFS, but made of S2D instead of clustered shared-JBOD Storage Spaces.

P.S. Or you can just pirate the shit out of it, and then S2D becomes useful (meaning no license cost).
 