So...how many 2.5" disks can you put in a Chenbro NR40700?


kapone

Well-Known Member
May 23, 2015
In addition to the 48x 3.5" bays?

Well, I guess the answer is 16 (not counting the 2x rear 2.5" hot-swap bays). Using some el-cheapo brackets to hold 16 internal SSDs (I needed some scratch SSD space in my SAN nodes), they just fit.

IMG_0175 copy.jpg

The SSD enclosures take up all of the area to the "right" of the motherboard (if you're looking from the front), and one of them is mounted behind the CPU. Each holds four SSDs. I mean...it was all empty space anyway... :)

(Yes, I know...my cabling is not exactly the cleanest...it's functional.) The chassis is fairly cramped now, but all within spec. The SSDs are at ~30-32°C, and the CPU etc. are all running cool. The only thing running hot is the Adaptec 8-series RAID card; that needed a fan.
 

ca3y6

Well-Known Member
Apr 3, 2021
Nice. What is the device between the SSDs and the motherboard? Doesn't look like an expander given the number of ports.

What do you use for the cabling to power the SSDs? This has been a major headache for me, with bad-quality splitters that also put a ton of pressure on the connectors and take up lots of space. You seem to have found something much more compact; I can hardly see them.
 

kapone

Well-Known Member
May 23, 2015
ca3y6 said:
What is the device between the SSDs and the motherboard? Doesn't look like an expander given the number of ports.
SATA coupler…Didn’t have long enough thin SATA cables in my parts bin…:p

And the SSDs are powered by a custom harness I had to build. I had crimpable SATA power plugs on hand, which makes for extremely short, precisely positioned cables.

Notice the electrical tape…covering my amazing soldering job! I’ll clean this up…some day.
 

kapone

Well-Known Member
May 23, 2015
p.s. These SAN nodes have ~1PB of storage each (about half replicated): 48x 22TB HDDs in RAID-60 (two RAID 6 spans of 22+2 spares each).
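
Quick back-of-envelope on how that lands at ~1PB (treating "22+2" as 22 data + 2 parity drives per 24-drive RAID-6 span, which is one reading of that shorthand):

```python
# Back-of-envelope for the ~1PB figure, assuming each 24-drive RAID-6 span
# is 22 data + 2 parity (a reading of "22+2", not confirmed).
drive_tb = 22          # 22TB HDDs
spans = 2              # RAID-60 stripes across two RAID-6 spans
data_per_span = 22
parity_per_span = 2

raw_tb = (data_per_span + parity_per_span) * spans * drive_tb   # 1056 TB raw
usable_tb = data_per_span * spans * drive_tb                    # 968 TB usable (~1PB)
print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")
```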
 

kapone

Well-Known Member
May 23, 2015
Well, I guess the cleanup is coming sooner than I thought. Looks like I need to up the flash side of the storage, which will require additional NVMe capacity. Except...this motherboard (X9SRL-F) is not the right one for multiple PCIe NVMe cards. Not enough slots.

So...went hunting for new motherboards for the SAN nodes and guess what?

Screenshot 2025-08-04 at 8.35.42 AM.png

These X9DRX boards are cheaper than lunch! (And should be perfectly fine for file server/SAN duties.) TEN PCIe 3.0 x8 slots should be more than enough.
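
If memory serves, the lane math works out exactly on that platform (a rough sketch, assuming dual E5-2600 CPUs at 40 PCIe 3.0 lanes each, which is worth double-checking):

```python
# Rough lane budget for a dual-socket E5-2600 board like the X9DRX
# (assumes 40 PCIe 3.0 lanes per CPU -- an assumption, verify against the manual).
lanes_per_cpu = 40
cpus = 2
slots = 10
lanes_per_slot = 8

available = lanes_per_cpu * cpus   # 80 lanes total
needed = slots * lanes_per_slot    # 80 lanes across ten x8 slots
print(f"available: {available}, needed: {needed}")   # both 80 -- every slot gets a full x8
```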

It'll require some creativity to fit it into the NR40700 chassis, but it should fit (the motherboard tray spans the whole width, is essentially empty, and already supports E-ATX boards).

We'll see.
 

ca3y6

Well-Known Member
Apr 3, 2021
Does it need to be an AIC SSD (PCIe card)? Can't you go for U.2 SSDs? There you can use a PCIe switch that doesn't require bifurcation; there is a whole thread about them. You can find PCIe x8 to 4x U.2, and PCIe x16 to 8x U.2.
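
Rough numbers on what those switch cards give you (my assumptions: PCIe 3.0 at roughly 0.985 GB/s usable per lane, each U.2 drive on an x4 link):

```python
# Uplink vs. downstream bandwidth for PCIe-switch U.2 cards (assumptions:
# PCIe 3.0 at ~0.985 GB/s usable per lane, each U.2 drive wired x4).
GBPS_PER_LANE = 0.985

def switch_card(uplink_lanes, drives, lanes_per_drive=4):
    uplink = uplink_lanes * GBPS_PER_LANE
    downstream = drives * lanes_per_drive * GBPS_PER_LANE
    return uplink, downstream

for up, n in [(8, 4), (16, 8)]:
    uplink, downstream = switch_card(up, n)
    print(f"x{up} -> {n}x U.2: ~{uplink:.1f} GB/s uplink vs ~{downstream:.1f} GB/s of drives "
          f"({downstream / uplink:.0f}:1 oversubscribed)")
```

Both configurations come out 2:1 oversubscribed, which is usually fine for file-server duty since all the drives rarely peak at once.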
 

kapone

Well-Known Member
May 23, 2015
ca3y6 said:
Does it need to be an AIC SSD (PCIe card)? Can't you go for U.2 SSDs? There you can use a PCIe switch that doesn't require bifurcation; there is a whole thread about them. You can find PCIe x8 to 4x U.2, and PCIe x16 to 8x U.2.
That's on the table as well, but the X9SRL-F is already out of usable 3.0 slots, so a motherboard change was needed anyway.
 

seadog2441

New Member
Mar 19, 2023
ca3y6 said:
Does it need to be an AIC SSD (PCIe card)? Can't you go for U.2 SSDs? There you can use a PCIe switch that doesn't require bifurcation; there is a whole thread about them. You can find PCIe x8 to 4x U.2, and PCIe x16 to 8x U.2.
Very good point actually. Since U.2 drives come in the same 2.5" SFF form factor as SATA drives, OP can skip 2.5" SATA and 2280/22110 NVMe entirely with minimal effort while still keeping hot and slow tiers respectively, which makes the setup much, much simpler IMO.
 

kapone

Well-Known Member
May 23, 2015
seadog2441 said:
Very good point actually. Since U.2 drives come in the same 2.5" SFF form factor as SATA drives, OP can skip 2.5" SATA and 2280/22110 NVMe entirely while still keeping hot and slow tiers respectively, which makes the setup much, much simpler IMO.
You're pretty much on point. While I crammed those SATA SSDs in there for scratch space, my needs seem to be changing, as I'm switching to an active-active Postgres setup on the backend. SATA SSDs just won't have enough queue depth for that. NVME it is, but in what form factor remains to be seen.

Edit: I just didn't want to be constrained by the current motherboard, hence the hunt for a new motherboard. And the geeky side of me is kinda tickled...looking at that huge motherboard and all those slots.
 

seadog2441

New Member
Mar 19, 2023
kapone said:
You're pretty much on point. While I crammed those SATA SSDs in there for scratch space, my needs seem to be changing, as I'm switching to an active-active Postgres setup on the backend. SATA SSDs just won't have enough queue depth for that. NVME it is, but in what form factor remains to be seen.

Edit: I just didn't want to be constrained by the current motherboard, hence the hunt for a new motherboard. And the geeky side of me is kinda tickled...looking at that huge motherboard and all those slots.
Yeah, no need for an explanation. An ancient $60 X9 just to prove your concept is good enough for us, we've all been there. Go for it :D
 

kapone

Well-Known Member
May 23, 2015
The X9DRX is a ... big motherboard. I knew that going in, but to actually put it in (with some hacking) and see the result...

IMG_0177 copy.jpg

It takes up almost the entire width of the NR40700 motherboard tray...

IMG_0178 copy.jpg

The motherboard even hangs off the tray by ~1/4" depth-wise. Those are Dell 3U+ active heatsinks. Had to hack the screws, as the stock ones won't fit a Supermicro motherboard, but they're very nice heatsinks. ~20°C at idle on air, with a 130W CPU (they're rated for 160W CPUs).

Had to take the Dremel to the chassis and carve out some space in the back.

IMG_0179 copy.jpg

But it will all fit. :) I have to admit...it looks so...empty...with so much motherboard...:D
 

kapone

Well-Known Member
May 23, 2015
It's in. :)

The scratch SSDs got distributed between the two nodes (so 8x SSDs per node; these are 500GB SSDs, so ~4TB of SSD scratch space per node) and relocated to the lower part of the chassis where the PSUs are. The hot-swap cage for the boot drives got relocated to the bottom of the chassis as well.

IMG_0180 copy.jpg

p.s. Wiring everything in the lower part of the chassis was a pain; it's fairly cramped in there now.

And the motherboard/tray is in, all wired up and tested successfully.

IMG_0181 copy.jpg

Even after adding the two Mellanox NICs, the Adaptec HBA and the Fusion-io SX350...so many slots... :)
 

kapone

Well-Known Member
May 23, 2015
Added an "exhaust" fan, a la Supermicro, to draw air through the Adaptec heatsink (and any other AICs/SSDs I decide to add in the remaining PCIe slots). I may add another one depending on what type and how many SSDs I choose.

IMG_0182 copy.jpg
 

kapone

Well-Known Member
May 23, 2015
Ahh...stress testing revealed a few issues with this. Once the SSDs were added and a full scrub/patrol read was running on all 48 drives, the 5V rail started drooping and issues surfaced. After much swearing and tearing my hair out (not that I have any), it looks like more 5V is needed (the SSDs grabbed almost 8A from the 5V rail, and the HGST HDDs need 1A per disk during heavy sequential operations); the 12V is perfectly fine.

I'm mulling over whether to redo the entire power supply setup with something else (with more 5V amps), or just add a buck converter to each PSU that's driving the expanders (the server has its own PSU).

Anybody have any good/bad reviews about this particular one?


Seems (relatively) well made, and 30A is enough to drive the 24 HDDs in each expander (even the power-hungry HGSTs). The SSDs will continue to be powered by the 5V rail from the HP PDB (it's rated for 20A on the 5V rail). So, ~50A worth of +5V juice per expander/SSD combo.
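
Roughly how the 5V budget pencils out (a sketch using the figures above, worst case):

```python
# 5V budget per expander/SSD combo, using the figures quoted in this thread
# (worst-case draw, not independently measured).
hdds_per_expander = 24
hdd_5v_amps = 1.0            # HGST draw per disk during heavy sequential ops
ssd_5v_amps_total = 8.0      # "almost 8A" observed across the scratch SSDs

hdd_draw = hdds_per_expander * hdd_5v_amps    # 24A -> fits under the 30A buck converter
ssd_draw = ssd_5v_amps_total                  # stays on the HP PDB's 20A 5V rail
print(f"HDDs: ~{hdd_draw:.0f}A of 30A, SSDs: ~{ssd_draw:.0f}A of 20A "
      f"(~{30 + 20}A of 5V capacity total)")
```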

p.s. I wish I could fit the Supermicro 847 PDB and PSUs in here (they're too long). That PDB is rated for a whopping 100A on the 5V rail.
 

kapone

Well-Known Member
May 23, 2015
Well..thermal testing is not looking good.

With all 48 bays filled with HGST 10TB HDDs, two Adaptec controllers, two 6.4TB SX350s, two 40Gb NICs and a few odds and ends, things are getting a little toasty. The main culprit is that the server components sit "behind" those 48 HDDs, which generate a lot of heat. Unless I run the fans at stupid levels, the temperatures get a little uncomfortable.

I have half a mind to chop the case in half, essentially making it 8U (I have more than enough rack space/DIY space), and let the server components have their own airflow.

Decisions...decisions.
 

ca3y6

Well-Known Member
Apr 3, 2021
On the topic of crimping SATA power cables: were you using a crimp connector or a push-in connector?

I am thinking of creating my own custom cables using push-in connectors (since it seems to be the solution I am least likely to **** up). I am concerned about the failure modes (short circuits, fire, damaging SSDs). I am going to connect SATA SSDs, so they won't pull much in terms of power, but I will still have 10-12 drives per cable, which should be roughly equivalent to the power consumption of a couple of HDDs. But having never made any custom cables, I am wondering if I should do any soldering on top of the push-in connector? It seems to me that the connectors are so close to each other that I am more likely to short them than improve the connectivity.
 

nexox

Well-Known Member
May 3, 2023
Push-in connectors are super easy; I've moved them around on PSU cables to line up with my drives. You just press the wires in with a flathead screwdriver, then snap on the back cover; if that fits, it means you got the wires in all the way (with a quality connector brand, anyway). Don't solder those; there's insulation and plastic all around and you'd just melt things. Make sure you're using the thickest gauge conductors supported (likely 18AWG), rated for a reasonably high temperature (probably 90°C or higher), to get the most current handling (but not silicone insulation, even though that's rated to 200°C or so, because its insulation is thicker and probably won't fit the press-in connectors).
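
To put some rough numbers on the 10-12 drive case (the per-drive draw here is a guess, so check your SSDs' datasheets):

```python
# Rough 5V current estimate for a 10-12 SSD daisy chain. Per-drive figures are
# assumptions (~0.3A idle, ~1.2A worst case for a 2.5" SATA SSD), not measured.
drives = 12
idle_amps = 0.3
busy_amps = 1.2
ampacity_18awg = 10     # a common conservative chassis-wiring figure for 18AWG

print(f"idle: ~{drives * idle_amps:.1f}A, all drives busy: ~{drives * busy_amps:.1f}A "
      f"(vs ~{ampacity_18awg}A for a single 18AWG run)")
# If everything hits at once (e.g. a scrub), one run can get marginal, so splitting
# the drives across two cables keeps plenty of headroom.
```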
 

kapone

Well-Known Member
May 23, 2015
To be clear though, I didn't crimp any SATA power cables (yet). Had a whole bunch of 1→4 power harnesses lying around in my parts bin.