Build Name: Dave's TestBed.
OS: Server 2016 Enterprise
CPU (per node): Dual 8 core (16 thread) Xeon v1
Motherboard: Reclaimed IBM M4 servers.
Chassis:
Drives: See below
RAM: 128GB
Add in Cards (per node): LSI 9361-8i, M5110 (host OS), Mellanox VPI 40Gb cards.
Other Bits: DAS SAS3 Expanders.
My co-worker and I have been playing around with a 2-node 2016 Hyper-V cluster here at work. We’ve got it running on a couple of older (reclaimed) Lenovo servers (see above). For our limited testing, they work great and let us play and learn (tear apart, rebuild, relearn). Our goal is to eventually use this testing playground as a proof-of-concept for a 2-3 node cluster running 20 to 30 Hyper-V VMs, some of which would probably host databases for 20-50 users.
We want to see how this thing will run with Storage Spaces Direct, in an approach similar to a hyperconverged solution. We’d use the 40Gb Mellanox connection for all node-to-node communication, and each node would have a SAS3 DAS unit. Something small, like the 12-bay, 2U Supermicro SAS3 JBOD units (826BE1C / 826BE2C). Storage would look like this:
* 8 x 6TB Seagate SAS3 drives (ST6000NM0105, 4Kn, 7200 RPM) (storage tier)
* 4 x Micron S630DC (2-3 DWPD) SAS3 960GB SSDs (caching tier)
* Total Capacity at build: 18TB.
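To sanity-check the sizing, here is a back-of-envelope sketch. The assumptions are mine, not from the post: the 8 HDDs and 4 SSDs fill one 12-bay JBOD per node, and the cluster uses two-way mirroring across the 2 nodes (other resiliency choices change the usable number).

```python
# Back-of-envelope S2D capacity/cache sizing sketch.
# Assumptions (not stated in the post): 8 HDDs + 4 SSDs per node,
# 2 nodes, two-way mirror resiliency.

HDD_TB, HDDS_PER_NODE = 6.0, 8
SSD_TB, SSDS_PER_NODE = 0.96, 4   # 960 GB ~= 0.96 TB
NODES = 2
MIRROR_COPIES = 2                 # two-way mirror

raw_capacity = HDD_TB * HDDS_PER_NODE * NODES   # capacity tier, all nodes
raw_cache = SSD_TB * SSDS_PER_NODE * NODES      # cache devices, all nodes
usable = raw_capacity / MIRROR_COPIES           # before reserves/overheads

print(f"raw capacity tier: {raw_capacity:.1f} TB")
print(f"raw cache: {raw_cache:.2f} TB ({raw_cache / raw_capacity:.1%} of raw)")
print(f"usable (2-way mirror, approx): {usable:.1f} TB")
```

Under those assumptions the cache works out to roughly 8% of raw capacity, which is in the ballpark of the ~10% rule of thumb often quoted for S2D cache sizing, consistent with "over-sizing a bit" for a read-and-write cache.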
We’re over-sizing the cache a *bit* because our understanding is that both reads and writes would be cached with this storage device layout (correct me if I’m wrong). However, I’ve gotten out of my depth thanks to my ignorance of the intricacies of SAS. I’ve read far too many web pages, wiki articles, and spec sheets. So if you don’t mind a few questions:
1) Is there any chance we’d actually see a benefit from that *theoretical* 24Gb/s of bandwidth (via the dual-port SAS3 devices) that the vendors are fond of quoting? I guess my major concern is those Supermicro chassis. I've read they use re-branded LSI components, but that doesn't mean I understand all those intricacies. Given the $$$ involved, I'm just trying to make sure what I get from that setup beats a bunch of NL-SAS drives with standard SATA SSDs.
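One way to frame question 1 is to ask whether the drive set can even outrun a single x4 SAS3 uplink, since that is when a second link (dual-port/dual-expander path) could pay off for throughput. The per-drive figures below are ballpark assumptions, not vendor specs:

```python
# Rough feasibility check: aggregate sequential drive throughput vs.
# one x4 SAS3 wide-port link. All throughput figures are assumptions.

LANE_MBPS = 1200          # 12 Gb/s SAS3 lane ~= 1200 MB/s after 8b/10b encoding
LANES_PER_PORT = 4        # one SAS3 wide port = 4 lanes

hdd_seq = 8 * 220         # 8 x 7200 RPM NL drives, ~220 MB/s sequential each
ssd_seq = 4 * 900         # 4 x SAS SSDs, ~900 MB/s sequential each (assumed)

drives_total = hdd_seq + ssd_seq
single_link = LANE_MBPS * LANES_PER_PORT

print(f"aggregate drive throughput: ~{drives_total} MB/s")
print(f"one x4 SAS3 link:           ~{single_link} MB/s")
print("second link could help" if drives_total > single_link
      else "single x4 link is enough")
```

Even under these generous sequential assumptions the drive set only modestly exceeds one x4 link (~5.4 GB/s vs ~4.8 GB/s), and real VM workloads are mostly random I/O where the expander link is rarely the bottleneck. The stronger argument for dual-port is path redundancy, and (if I recall the Supermicro naming correctly) that requires the dual-expander 826BE2C variant plus a second HBA path, not the single-expander 826BE1C.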
2) Our LSI cards. Nice cards, and they've served us well as RAID cards, but would we be wasting their potential in an S2D configuration? Would it make more sense to use something far more stripped-down, such as an LSI SAS3008-based HBA (e.g. the Lenovo N2215)? Again, my concern is that a stripped-down controller will lower performance, shrinking the performance delta of the dual-port SAS3 channel versus NL-SAS drives and SATA SSDs.
Thank you all in advance for your help, suggestions and experience.
Dave