....Getting Into More Trouble....


AJXCR

Active Member
Jan 20, 2017
565
96
28
35
.....So I'm trying to get a deal worked out on one of the Supermicro boards that support 24+ NVMe drives..

Before you guys start hammering on me:
1. I know they use a proprietary form factor
2. I've seen Supermicro's disclaimer:

"Due to the complexity of integration, this product is sold as completely assembled systems only (with minimum 2 CPU, 4 DIMM and 6 NVMe). Please contact your Supermicro sales rep for special requirements."

3. I recognize that whatever it is I'm doing probably doesn't require 24x+ NVMe drives...
4. etc. etc. etc.....

That being said, SM's high-end NVMe storage solutions appear to accomplish this using the following components:

For 24x NVMe:
-1x BPN-SAS3-826TQ-B2B (obtainable): "2-port 2U SAS3 12Gbps backplane, support up to 2x 2.5-inch SAS3/SATA3 HDD/SSD"

-1x BPN-NVME3-216EB (No Source Found): "BPN-NVMe3-216A-S4 Backplane Base Board"

-1x AOC-2UR6N4-i4XT-P (obtainable): "2U Ultra Riser with 4-port 10GbE RJ45 (10GBase-T)"
http://www.supermicro.com/a_images/products/Accessories/AOC-2UR6N4-i4XT.jpg


-2x BPN-NVME3-216EL (No Source Found): "PCIe Gen3x16 input to PLX9765 to support 12x NVMe port" ...This simply looks like the NVMe equivalent of the expander you'd see on a BPN-SAS3-216EL1 (quick port/lane math after this list)

-4x CBL-SAST-0819 (Obtainable): "OCuLink v.91,INT,PCIe NVMe SSD, 65CM,34AWG"

-4x CBL-SAST-0820 (Obtainable): "OCuLink v.91,INT,PCIe NVMe SSD, 85CM,34AWG"

-1x RSC-U2N4-6 (Obtainable ~$80 new): "2U Ultra Riser Card with 4 NVME and PCI-Ex16,RoHS/REACH"

-1x RSC-R1UW-E8R (Obtainable ~$40 new): "RSC-R1UW-E8R-O-P"
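
Quick napkin math on how those parts add up to 24 bays, just as a sanity check. The x16-uplink and 12-ports-per-expander figures come straight from the part descriptions above; the x4-per-drive assumption is mine, so treat this as a sketch rather than a spec:

Code:
# Port/lane budget for the 24x NVMe parts list above.
# Assumptions (mine, not verified specs): each BPN-NVME3-216EL takes a
# PCIe Gen3 x16 uplink into its PLX9765 and fans out 12 NVMe ports,
# and each U.2 drive gets the usual x4 link.
expanders = 2
ports_per_expander = 12
uplink_lanes = 16
lanes_per_drive = 4

drives = expanders * ports_per_expander              # 24 bays total
drive_side_lanes = drives * lanes_per_drive          # 96 lanes on the drive side
host_side_lanes = expanders * uplink_lanes           # 32 lanes back to the host
print(drives, drive_side_lanes / host_side_lanes)    # 24 3.0  (3:1 oversubscription)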

....They also make this 1U Ultra riser which supports 6x OCuLink NVMe ports (AOC-URN6-i2XT)...


As well as this AOC, which includes 4x OCuLink ports and a PLX switch, and is both cheap and available (AOC-SLG3-4E4T):



And this AOC with 2x OCuLink.. even cheaper and available (AOC-SLG3-2E4T):


Regarding the 1x BPN-NVME3-216EB and 2x BPN-NVME3-216EL: there are several commonly available Supermicro 2.5" combo backplanes which support both SAS3 and some quantity of NVMe (e.g. the BPN-SAS3-216A-N4 2U). This is similar to what Intel currently seems to offer, just less modular..

Then I stumbled across the following user manual for the BPN-NVMe3-216A-N4 2U:
https://www.supermicro.com/manuals/other/BPN-NVMe3-216A-N4.pdf

The nomenclature for the other combo backplanes usually specifies the number of NVMe slots using the "-N*" suffix at the end of the PN, which threw me off a bit in this case...
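
Just to show what I mean about the "-N*" suffix (purely my own illustration of the naming pattern as I read it, nothing official from Supermicro):

Code:
import re

def nvme_ports_from_pn(pn):
    """Pull the NVMe port count implied by a trailing -N<x> in the part number."""
    m = re.search(r"-N(\d+)$", pn)
    return int(m.group(1)) if m else None

print(nvme_ports_from_pn("BPN-SAS3-216A-N4"))    # 4 -> 4 of the 24 bays wired for NVMe
print(nvme_ports_from_pn("BPN-NVMe3-216A-N4"))   # also reads as 4, which is what threw me off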

An interesting excerpt from the manual:

....I found it very interesting that the parts list for the pre-configured SM chassis lists:
1x BPN-SAS3-826TQ-B2B
1x BPN-NVME3-216EB
2x BPN-NVME3-216EL

Whereas the user manual suggests that 1x BPN-NVME3-216EB + 2x BPN-NVME3-216EL = 1x BPN-NVMe3-216A-N4...

Could it be that, because the BPN-SAS3-826TQ-B2B is simply a TQ pass-through, it's just being relabeled/repurposed in combination with the BPN-NVME3-216EB & BPN-NVME3-216EL cards? If so, I see BPN-SAS3-826TQs steeply increasing in value.. This might also lead one to believe that the 216EB and 216EL cards (or possibly the 9400-16i/3840a) might work in combination with other SAS3 TQ backplanes...

May have to start an initiative to corner the second-hand TQ market! :D
 
  • Like
Reactions: T_Minus

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I like the concept but I think it will be hard to complete less expensively than getting a chassis.
 
  • Like
Reactions: pgh5278

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
I like the concept but I think it will be hard to complete less expensively than getting a chassis.
Well, I'm now the proud new owner of an "X10DRU-i+" (cheap) and a SAS3 TQ 2.5" 24-port SM backplane ($100). Processors are covered, drives are covered, memory is covered, and I picked up an extra T580-CR yesterday.. ordered a set of SM920SQ PSUs for the 847 that I could toss in here instead. Really, provided that the backplane works as expected, I think it would be preferable to just run several 9400-16is directly connected to a TQ backplane and forgo the expanders entirely (rough HBA math below). Now I just need to select a chassis...
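
Rough HBA math for the direct-attach idea (a sketch only; I'm assuming a 9400-16i presents 16 ports/lanes on the drive side in tri-mode and that NVMe drives can attach at x1/x2/x4; double-check Broadcom's docs before trusting any of it):

Code:
import math

def hbas_needed(drives, lanes_per_drive, lanes_per_hba=16):
    # ceil(total drive-side lanes needed / lanes one HBA can offer)
    return math.ceil(drives * lanes_per_drive / lanes_per_hba)

# 24 bays on the TQ backplane, no expanders:
print(hbas_needed(24, 1))   # 2 HBAs -> SAS3/SATA, one port per drive
print(hbas_needed(24, 2))   # 3 HBAs -> NVMe at x2 per drive
print(hbas_needed(24, 4))   # 6 HBAs -> NVMe at a full x4 per drive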

I guess we'll find out here pretty quickly.
 
Last edited:

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
@Patrick I almost wonder how close a fit it would be in an Intel chassis... the S2600WT2 and the X10DRU-i+ are extremely close in external dimensions...

Listed dimensions are:
Intel: 16.7" x 17"
SM: 17" x 16.8"

I was extremely impressed by the build quality of the Intel unit.


I've also been giving some serious consideration to the possibility of pulling the 2U backplane out of my SC847 and transplanting the GS7200 internals into the lower 2U section under the motherboard tray. This would both save space and provide serious CFM through the switch chip heatsink.

If attempting that, it wouldn't take a whole lot of extra time to cut and TIG weld in the appropriate mounting to convert from one 4U backplane to two rows of 2U backplanes... top all NVMe, bottom SAS3.

Really need time to
 

Biren78

Active Member
Jan 16, 2013
550
94
28
TIG weld and server makes me nervouser

ATX, EATX, and SSI EEB servers are all made to work in many cases. These are custom designs made for custom cases. The 2600WT and the SM look to have mobos that accept PSUs directly, without PDBs.
 
  • Like
Reactions: Patrick

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Good luck! As @Patrick said, I would buy a chassis that's ready to drop your board into if you can. It'll save you a lot of time and headache, and ultimately money down the road, versus having to piece together the backplanes, chassis, etc...

Either way please post the build :)
 
  • Like
Reactions: Patrick

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
TIG weld and server makes me nervouser

ATX, EATX, and SSI EEB servers are all made to work in many cases. These are custom designs made for custom cases. The 2600WT and the SM look to have mobos that accept PSUs directly, without PDBs.
Understood... I finished putting together a 2600WT two days ago.


That being said, one of our pipeline-certified welders can literally cut a soda can in half and weld it back together using TIG.

This would obviously be on a bare chassis (which is just metal) and would be more about getting the caddy receptacle configuration right when moving from four rows of horizontal 3.5" trays to two rows of vertical 2.5" trays.

We weld on machines with highly complex $100k+ PLC systems all the time. There are simply precautions that must be taken. No big deal.
 
Last edited:
  • Like
Reactions: Tha_14

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
Good luck! As @Patrick said, I would buy a chassis that's ready to drop your board into if you can. It'll save you a lot of time and headache, and ultimately money down the road, versus having to piece together the backplanes, chassis, etc...

Either way please post the build :)
Agreed that this is the correct approach... but two parts SC847 with 48 2.5" bays and one part GS7200 10G switch? If done right, I'd want to frame it and hang it on my wall.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I'll be honest, I have a handful of Intel drive cages I was going to DIY onto some chassis and then integrate into some towers, etc... purchased random/cheap towers, barebones SuperMicro/generic chassis, etc... Then, as I got past putting together one or two DIY servers and moved on to managing them, spare parts, etc., a whole new world of requirements and "best practices" quickly became apparent :)

When you start deploying 4, 6, 8+ chassis in your local rack (or colo), the big problem with custom stuff is replacement, repair, and general serviceability. It's not there, and now you have to do it all yourself... you can't call up and get a replacement part because it's a custom integration, and you can't call the DC and have them replace a backplane because it's integrated differently, etc...

With the quantity of parts and the scale you can/will accomplish with what you're getting, I think this is something that's going to come back and bite you if you go too far down the 'custom' rabbit hole :)

Don't let me tell you not to do it... search my username, I've done some crazy ideas too!! Now, years after starting, it's amazing how differently I would have done things :) If you're in no rush for a server, then hold off and wait for a 'great deal' on a barebones system (with mobo) and add the rest... Intel, SuperMicro, Dell, HP, whatever you like. It's MUCH faster and easier to build and then manage your gear if you're not hunting down a random 24" cable, or an adapter, or a fan splitter, or whatever...

I've learned that building a good, reliable, and easily serviceable system usually means not DIYing it unless you have hours/days to spend making it like 'new'.

(While a lot of this is true for home use, it's more geared toward businesses. DIYing at home usually isn't time-sensitive, etc...)
 
  • Like
Reactions: pgh5278

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
I'll be honest, I have a handful of Intel drive cages I was going to DIY onto some chassis and then integrate into some towers, etc... purchased random/cheap towers, barebones SuperMicro/generic chassis, etc... Then, as I got past putting together one or two DIY servers and moved on to managing them, spare parts, etc., a whole new world of requirements and "best practices" quickly became apparent :)

When you start deploying 4, 6, 8+ chassis in your local rack (or colo), the big problem with custom stuff is replacement, repair, and general serviceability. It's not there, and now you have to do it all yourself... you can't call up and get a replacement part because it's a custom integration, and you can't call the DC and have them replace a backplane because it's integrated differently, etc...

With the quantity of parts and the scale you can/will accomplish with what you're getting, I think this is something that's going to come back and bite you if you go too far down the 'custom' rabbit hole :)

Don't let me tell you not to do it... search my username, I've done some crazy ideas too!! Now, years after starting, it's amazing how differently I would have done things :) If you're in no rush for a server, then hold off and wait for a 'great deal' on a barebones system (with mobo) and add the rest... Intel, SuperMicro, Dell, HP, whatever you like. It's MUCH faster and easier to build and then manage your gear if you're not hunting down a random 24" cable, or an adapter, or a fan splitter, or whatever...

I've learned that building a good, reliable, and easily serviceable system usually means not DIYing it unless you have hours/days to spend making it like 'new'.

(While a lot of this is true for home use, it's more geared toward businesses. DIYing at home usually isn't time-sensitive, etc...)

Jeez.... just rain on my parade why don't you?

We'd have to take some measurements and work it up in CAD, but there is a chance it could be integrated quite nicely... basically turning the SC847 into a disk shelf and switch rack.

Remember, on the 847 the back half is two 2U layers... the top layer is a sliding shelf that holds the motherboard, and the bottom 2U area is basically dead space with Molex power cables running across it to the 2U backplane.

If that backplane is removed, you basically have a second layer which could easily house a second motherboard (or a switch mainboard). It seems to me that a custom slide-out tray integrated into the bottom 2U space (basically a duplicate of the top layer) wouldn't be out of the question. Half of the length of the Gnodal switch is air... it could get a lot shorter.

I think what you'd end up with is something similar to a blade/2 node stacked system. The switch ports would all remain external on the back of the chassis.

Originally this was just a fleeting idea. But now that you guys have turned it into a challenge I think I'll go measure it.
 

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
The crowd chants... DOOO IT!! :)
Ok, someone tell me why I'm way off base here. Observation #1 is that the horizontal portions of the 3.5" HDD caddy supports are simply held in by two screws. Once those are removed, the trays slide out, leaving only the vertical sections. This may mean that swapping to a 2U 2.5" vertical backplane is as easy as removing a few screws and buying new drive bays/caddies like the Intel kits (a standardized design like this would make a lot of sense from a manufacturing standpoint):



Next up is the potential for combining the Gnodal GS7200 internals and the SM SC847 case sans rear 2U backplane. Unless I'm missing something big here, this looks like a walk in the park. To really make it professional, I could have one of our fab shops build a tray similar to the upper 2U section where the motherboard mounts. From a space perspective, however, there is all the room in the world. This mod would save space, guarantee that air is drawn from the front of the rack, and allow for significantly larger-diameter, higher-flow fans (operating at a much lower RPM):







Just a little plug for me.. That server was operational when I walked into the house earlier.
 

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
Just for reference, if someone wanted to do the conversion from horizontal 3.5" to vertical 2.5" in the back, the standard 2U Intel bays are the perfect height. The only requirement would be repositioning the vertical divider columns. The SM chassis is set up for 4x 6-tray bays and the Intel server is set up for 3x 8-tray bays. I would assume that SM already makes these, but if not, it's a really easy fix. We also know that the "2" variant of the Intel NVMe hot-swap add-in kit works in the PCIe slot of a Supermicro board and provides 4x drives per card.. so theoretically you could get to 16 NVMe drives just on the backplane using existing hardware and no expanders. Here are the Intel bays compared to the SM bays with the horizontal slats removed:






 

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
In my case, where I think I really am going to integrate the fiber switch and the SM SC847 chassis together, the rear backplane will be removed and replaced with the 72 SFP ports that make up the Gnodal rear panel.

To address the front, I picked up 2x BPN-SAS3-216A 2U 24-port backplanes off eBay for $100 each. A simple bracket (probably already available from Supermicro for the SC417 and SC418 chassis) should fill the 4U space perfectly and allow for a total of 48 SAS3 SSDs.

Based on post number one, if it turns out that these backplanes are straight pass-through and can be used in combination with the soon-to-be-released NVMe x16 edge HBAs, I'll be a very happy camper. Particularly considering that I've only got $300 in the SC847 :)
 
  • Like
Reactions: T_Minus

Rand__

Well-Known Member
Mar 6, 2014
6,626
1,767
113
So you hope/assume/think that both the SAS3-A and the -TQ version might be NVMe capable? I really wonder whether you're right there ;)
 

AJXCR

Active Member
Jan 20, 2017
565
96
28
35
So you hope/assume/think that both the SAS3-A and the -TQ version might be NVMe capable? I really wonder whether you're right there ;)
If it's simply a pass-through, it should be. See the end of post #1, where the Supermicro parts list and the NVMe backplane manual seem to use the SAS and NVMe model numbers interchangeably (they've just tacked on NVMe-capable expanders).