Converting an HP DL380e Gen8 14xLFF Server to a Disk Shelf


audiophonicz

Member
Jan 11, 2021
I see, so you're using the full server chassis. I guess you're right and I should be thanking omegadraconis too, lol. In all honesty I thought the pic of the two 120mm fans was yours; that's what I was asking about the mountings for. But good to know about the 80mm too, since I'm thinking of going with a cut-down OEM chassis. Also thinking about a straight 12V PSU to try to cut down on the size and heat.

I'd also like to move from the 9207-8i card I'm currently using in the R730 to a 9207-8e and maybe use some SFF-8087 to SFF-8088 cables, but I haven't done much research yet, so I'm not sure about that.
So that's exactly what I'm doing, except with the 9206-16e. Same chip, same cables, I just have 4 ports. Should work fine for ya.

Looks like you've got a nice clean setup there, and a fine-looking rack. Would you mind doing a quick run-through of what you've got in there, please? The box above your storage looks quite interesting; what model is it?
Thanks! I actually use R730xd's and MD1400s at work, so while I wanted a setup I'm used to, I also wanted compact, low power, and low noise.

The rack itself is a 13" deep NavePoint 12U Wall Mount Rack, but the usable rack depth is only 10.5" rail to wall.


DAS1.jpg

Top to bottom:

- 24-port patch panel to the Cat6 running through the house
- MikroTik 24-port 1G switch
- MikroTik 8-port 10G router. Those are all 10G SFP+ Twinax cables.

- The next 2 are my ESXi cluster; they're both OnLogic 1.5U rack cases flipped backwards. i7-6700T, 32GB, 500GB Samsung SSD boot vols, FSP modular 1U PSU, and an Intel X520-DA2 10G card. The little blue lights are HDMI emulators.

- The 2U case is my TrueNAS Core box. It's a modified PlinkUSA 2U front-access case.
I cut about 3.5" off the back of the case so it would fit the rack depth, used a PicoATX 130W PSU, and put the power jack on the bottom corner (power brick on the bottom). Above that I 3D-printed a riser mount and cut a 5.25" bay in the case for a 4x 2.5" hotswap bay. In there I have 4x 1TB Crucial SSDs in RAID5 as the datastore for the virtual host cluster. i7-8500T, 16GB, dual 64GB SD cards in USB adapters for the mirrored boot vol, the LSI 9206-16e, and an Intel X520-DA2.

- Obviously next is the homebrew DL380 DAS: 6x 4TB WD in RAID6 + 2x hot spares. I think the max this can run is 6TB drives; those are next. Mounted on the rack shelf, with the shelf's lip, you can see it takes up more than 2U. It bothers me. :)

- And on the bottom with all the power is a Digital Loggers Web Power Switch.

Both ESXi hosts and the NAS have 1x 10G connection to the main network and 1x 10G connection to the iSCSI network. All 3 have the Thermaltake Engine 27 1U LP CPU cooler and Crucial memory. The whole setup is pretty quiet; I can't hear anything from the other side of the garage unless it's dead silent. The T-suffix i7 chips are the low-power ones. The entire rack only uses about 200 watts on average.
 

amp88

Member
Jul 9, 2020
I see, so you're using the full server chassis. I guess you're right and I should be thanking omegadraconis too, lol. In all honesty I thought the pic of the two 120mm fans was yours; that's what I was asking about the mountings for. But good to know about the 80mm too, since I'm thinking of going with a cut-down OEM chassis. Also thinking about a straight 12V PSU to try to cut down on the size and heat.
Cool. I'm also hoping that moving from an ATX PSU to the server PSU will improve efficiency a bit. The 450W 80+ Gold Seasonic unit I'm using isn't terrible, but it's at the low end of the load/efficiency curve, so if I can drop a few watts that'd be nice.

So that's exactly what I'm doing, except with the 9206-16e. Same chip, same cables, I just have 4 ports. Should work fine for ya.
Thanks for the reassurance, I'll hopefully move to that soon then.

Thanks! I actually use R730xd's and MD1400s at work, so while I wanted a setup I'm used to, I also wanted compact, low power, and low noise.

The rack itself is a 13" deep NavePoint 12U Wall Mount Rack, but the usable rack depth is only 10.5" rail to wall.

Top to bottom:

- 24-port patch panel to the Cat6 running through the house
- MikroTik 24-port 1G switch
- MikroTik 8-port 10G router. Those are all 10G SFP+ Twinax cables.

- The next 2 are my ESXi cluster; they're both OnLogic 1.5U rack cases flipped backwards. i7-6700T, 32GB, 500GB Samsung SSD boot vols, FSP modular 1U PSU, and an Intel X520-DA2 10G card. The little blue lights are HDMI emulators.

- The 2U case is my TrueNAS Core box. It's a modified PlinkUSA 2U front-access case.
I cut about 3.5" off the back of the case so it would fit the rack depth, used a PicoATX 130W PSU, and put the power jack on the bottom corner (power brick on the bottom). Above that I 3D-printed a riser mount and cut a 5.25" bay in the case for a 4x 2.5" hotswap bay. In there I have 4x 1TB Crucial SSDs in RAID5 as the datastore for the virtual host cluster. i7-8500T, 16GB, dual 64GB SD cards in USB adapters for the mirrored boot vol, the LSI 9206-16e, and an Intel X520-DA2.

- Obviously next is the homebrew DL380 DAS: 6x 4TB WD in RAID6 + 2x hot spares. I think the max this can run is 6TB drives; those are next. Mounted on the rack shelf, with the shelf's lip, you can see it takes up more than 2U. It bothers me. :)

- And on the bottom with all the power is a Digital Loggers Web Power Switch.

Both ESXi hosts and the NAS have 1x 10G connection to the main network and 1x 10G connection to the iSCSI network. All 3 have the Thermaltake Engine 27 1U LP CPU cooler and Crucial memory. The whole setup is pretty quiet; I can't hear anything from the other side of the garage unless it's dead silent. The T-suffix i7 chips are the low-power ones. The entire rack only uses about 200 watts on average.
Interesting, thanks for the run-through. That PlinkUSA front-access case was the one that caught my attention, but the 1.5U OnLogic cases are pretty neat too. Very impressive to see what you've managed to fit within the constraints of the rack.

FWIW, I'm running 8TB drives in my setup (currently got 9 drives in two separate pools), so you should be OK with that too. I don't have any higher capacity drives to test with at the moment though, so I don't know about anything bigger.

Good luck with your future experimentation! :)
 

audiophonicz

Member
Jan 11, 2021
Cool. I'm also hoping that moving from an ATX PSU to the server PSU will improve efficiency a bit. The 450W 80+ Gold Seasonic unit I'm using isn't terrible, but it's at the low end of the load/efficiency curve, so if I can drop a few watts that'd be nice.
Honestly, while I'm not a fan of Seasonic, I doubt an HP server PSU is going to be any better, and it'll probably be a bit louder. But I'd be interested to see if I'm wrong.

FWIW, I'm running 8TB drives in my setup (currently got 9 drives in two separate pools), so you should be OK with that too. I don't have any higher capacity drives to test with at the moment though, so I don't know about anything bigger.
8TB drives in the HP cage? Nice. I swore I read the cards only do 8TB and the DL380 G8 maxes out at 6TB, but that could have been referencing the stock HP RAID card, or it could just be out of date. Good to know. Incidentally, I just got my first pair of 16TB drives yesterday and put them into my backup NAS with zero issues. That one has a 9202-16e (direct-connect breakout cable), and since it's the previous 2008 chip family, I would expect the 2308-family 9206 and 9207 series to play nice as well.
 

amp88

Member
Jul 9, 2020
Honestly, while I'm not a fan of Seasonic, I doubt an HP server PSU is going to be any better, and it'll probably be a bit louder. But I'd be interested to see if I'm wrong.
My random guesstimate is about one hard drive's worth of power saving, so <10 watts going from the 80+ Gold Seasonic to an HP 460W Platinum Plus unit, but I'll see if there's any noticeable difference.
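As a rough sanity check on that guess, assuming ~90% efficiency for the Gold unit and ~94% for the Platinum at my load point, with a ~150 W DC load (all assumed round figures, not measurements):

$$\frac{150\ \text{W}}{0.90} \approx 167\ \text{W at the wall} \qquad \frac{150\ \text{W}}{0.94} \approx 160\ \text{W}$$

So roughly 7 W saved, which is indeed about one spun-up hard drive.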

8TB drives in the HP cage? Nice. I swore I read the cards only do 8TB and the DL380 G8 maxes out at 6TB, but that could have been referencing the stock HP RAID card, or it could just be out of date. Good to know. Incidentally, I just got my first pair of 16TB drives yesterday and put them into my backup NAS with zero issues. That one has a 9202-16e (direct-connect breakout cable), and since it's the previous 2008 chip family, I would expect the 2308-family 9206 and 9207 series to play nice as well.
Yeah, I've got a mix of 8TB drives in two pools in the HP cage (with the SAS expander backplane, 647407-001). One pool has shucked 5400rpm WD white label drives and the other has a mix of 7200rpm drives (Seagate SATA and HGST SAS), and I haven't had any issues so far. Those spec sheets and upper limits for capacity are often outdated/misleading, unless it's a known hard cap (e.g. the 2TB limit on some older hardware), but I'm happy to report 8TB works here. I don't know when I'll get my hands on any larger drives; I'm gonna take some time to let my wallet recover! ;)
 

audiophonicz

Member
Jan 11, 2021
So... this thread finally got me the motivation to refresh my DAS. I hope this helps someone else like it did me.

The bad news first: I have confirmed that the HP DL380 G8 Expander cage does NOT recognize 16TB drives. :(
Besides that, upon further inspection of the HP DL380 chassis, the fan mounts are pushed too far back for me to fit everything I need, so I went ahead with the alternate plan. Progress is as follows:

Got me some 80mm fans like you have, and a little PCI-bracket fan controller thingie. It's basically just a pot with the right connections. I made a bracket to mirror the stock type, spacing the fans out so they pull through the little slits and also cover some of the expander chip's heatsink. It's held down by 3x standard M3 case screws threaded into the shelf plate. The first fan has a grill to keep the miniSAS connectors away from it.

DAS1.jpg
DAS2.jpg


I cut and soldered the fan wires to the connector for the fan adjustment, which was kinda pointless cuz they're inaudible at full blast, but it turns out it has a nice little blue LED so I know when the unit is powered on. The old PSU had a green one, but the new PSU has no lights. Mounted the adjustment on the rear side of the case for easy access through the side panel.

DAS3.jpg


Of course I decided to use a straight 12V PSU, so for the 3.3V line I got some of these little guys and integrated one into the wire harness. Just wrapped some electrical tape around it to hold it to the connector. There is another one pictured on the side so you can see how tiny they are. There's also a little red LED on these so you can verify they're working.

DAS4.jpg
DAS5.jpg

All finished, so far so good. Thanks for you guys' contributions and for getting me motivated to fix my DAS. Good luck to you on yours!
(I tried to get a pic of them all lit up but they just move too fast...)

DAS6.jpg
DAS7.jpg
DAS8.jpg
 


amp88

Member
Jul 9, 2020
So... this thread finally got me the motivation to refresh my DAS. I hope this helps someone else like it did me.
Very nice job, looks great. Well done. :)

The bad news first: I have confirmed that the HP DL380 G8 Expander cage does NOT recognize 16TB drives. :(
Hmm, I'm just looking through your post and I see you've used a voltage regulator to provide the 3.3 volt input that's in the original connector. I don't know if you've already considered this, but there's a possibility the 16TB drives you have won't function when they're supplied with a 3.3 volt input. If you've already had them running in your backup NAS with a 3.3 volt input in the power connector then just ignore this because it won't be the issue.
Some drives, especially shucked (e.g. the WD white label drives that come in external enclosures) or newer enterprise drives, simply won't spin up when connected to a power source which has a 3.3v input. I don't know if there's a comprehensive list anywhere, but if you search for the model number of the drive along with "3.3v", "3.3v fix", "PWDIS" or "Power Disable" you may get some information. If you can't find a definite answer one way or the other you can test for it quite easily by covering the 3.3v pins on the SATA power connector of the drive with some insulating tape (ideally something that's not too thick, like Kapton, just so it's not too tight a fit inside the SATA power connector). Here's a video from Art of Server on YouTube with some more explanation, and a short guide on Instructables.
 

audiophonicz

Member
Jan 11, 2021
Hmm, I'm just looking through your post and I see you've used a voltage regulator to provide the 3.3 volt input that's in the original connector. I don't know if you've already considered this, but there's a possibility the 16TB drives you have won't function when they're supplied with a 3.3 volt input. If you've already had them running in your backup NAS with a 3.3 volt input in the power connector then just ignore this because it won't be the issue.
Some drives, especially shucked (e.g. the WD white label drives that come in external enclosures) or newer enterprise drives, simply won't spin up when connected to a power source which has a 3.3v input. I don't know if there's a comprehensive list anywhere, but if you search for the model number of the drive along with "3.3v", "3.3v fix", "PWDIS" or "Power Disable" you may get some information. If you can't find a definite answer one way or the other you can test for it quite easily by covering the 3.3v pins on the SATA power connector of the drive with some insulating tape (ideally something that's not too thick, like Kapton, just so it's not too tight a fit inside the SATA power connector). Here's a video from Art of Server on YouTube with some more explanation, and a short guide on Instructables.
This kills me... why, you ask?

Because, yes, the 16TBs are specifically shucked WD Whites.

Because way back when, I couldn't find a pinout for those connectors, so I actually bought an OEM one that specifically had colored wires instead of all black and re-pinned my connectors based off that. That wire was red, not orange, so for the last year I've been running 5V to that pin. I only went with the 3.3V because your pinout pic had that pin at 3.3V. All my other WD Blues spin up fine, but since you specifically mentioned WD Whites...

And because I did put the WD Whites into my backup NAS and formatted them... in a 3-in-2 hotswap bay that can be powered by 15-pin SATA or 4-pin Molex connectors, and of course I went with the 4-pins, which don't actually provide the 3.3V, so I can't say for sure whether it's converting down to 3.3V for the internal SATA/SAS or not.

Now the only way I can confirm/deny this for sure is to pull it all out again and swap the 3.3V buck converter for a 5V one (which I already purchased along with the 3.3V in case I needed the 5V), or cut that line off completely. ...Then again, I do have tons of Kapton.
 

amp88

Member
Jul 9, 2020
This kills me... why, you ask?

Because, yes, the 16TBs are specifically shucked WD Whites.
I'm about 99.9% sure that's the issue then.

Now the only way I can confirm/deny this for sure is to pull it all out again and swap the 3.3V buck converter for a 5V one (which I already purchased along with the 3.3V in case I needed the 5V), or cut that line off completely. ...Then again, I do have tons of Kapton.
I'd definitely try the Kapton-on-the-pins method from either of the 2 links above as a first resort.
 

DieBlub

Member
Dec 11, 2017
Sorry for digging up this thread again. I was just wondering: does the 2-port variant of the backplane require both to be connected, or is that just for multipath or failover?
 

omegadraconis

Member
Oct 23, 2017
Sorry for digging up this thread again. I was just wondering: does the 2-port variant of the backplane require both to be connected, or is that just for multipath or failover?
I'm not sure about multipath, but I know that failover works. In my testing with an LSI 2308-e I was able to pull one link cable and FreeNAS reported no changes in the system logs, with all disks still present. It seemed like multipath was not being reported in FreeNAS when I had both cables connected, but it's hard to tell if my setup would support it regardless.
As these shelves/backplanes in particular came out of DL380p Gen8s, HP's documentation might tell you if they support multipath with the correct controller.
 

DieBlub

Member
Dec 11, 2017
I'm not sure about multipath, but I know that failover works. In my testing with an LSI 2308-e I was able to pull one link cable and FreeNAS reported no changes in the system logs, with all disks still present. It seemed like multipath was not being reported in FreeNAS when I had both cables connected, but it's hard to tell if my setup would support it regardless.
As these shelves/backplanes in particular came out of DL380p Gen8s, HP's documentation might tell you if they support multipath with the correct controller.
Thanks for the reply. I was actually mostly thinking about getting two of these to work off of one 2-port SAS controller; that's why I asked. I'll try to dig a little deeper. Maybe I can come up with a very jank setup haha.
 

audiophonicz

Member
Jan 11, 2021
I was actually mostly thinking about getting two of these to work off of one 2-port SAS controller; that's why I asked.
You can totally do this. The 12LFF backplane has an expander chip built in, so you only need the 1 channel hooked up. The 3 ports are mainly for failover, as multipath is more of a software/driver-support kinda thing on the HBA or RAID card you're controlling them with, not the cage itself. I believe the 3rd port is supposed to reach back to a rear drive cage, which would only be a single connection.

That said, it's a 6G expander backplane, and the 16-channel LSI variants of 6G SAS are stupid cheap these days, so you could always throw in a 4-port card and have dual connections to both shelves for like 50 bux.
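And if bandwidth is the worry, quick napkin math, assuming ~180 MB/s sequential per modern LFF spinner and 600 MB/s usable per 6G lane:

$$4 \times 600\ \text{MB/s} = 2400\ \text{MB/s (one x4 wide port)} \qquad 12 \times 180\ \text{MB/s} \approx 2160\ \text{MB/s (12 drives flat out)}$$

So a single cable per 12-bay shelf only gets tight if every drive is streaming sequentially at once.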
 

gb00s

Well-Known Member
Jul 25, 2018
Poland
Maybe think about covering the disk shelf, including the fans and the gaps between the fans (with hard foam you can cut with a utility knife). This will make a significant difference in HDD temps.

HDD_Fan_Cover.png
 

omegadraconis

Member
Oct 23, 2017
Since this thread has been getting more attention, I thought I would post my completed version. In my original configuration I had the drive cage in a 4U shelf; this took up too much room, so I downsized to a 1U shelf. I also liked @audiophonicz's fan mounting bracket, so I borrowed that idea. Using 1/2-inch aluminum angle stock, I drilled fan holes and cut the fan curve for airflow. The PWM fans came with rubber mounts, so I reused those and added PVC fan grills for cable protection. I was able to squeeze 5x 80mm fans in with just enough room on the sides for cabling. I did add a cardboard shroud for better airflow after testing showed a lot of air loss on the sides. Power to the shelf is coming from an HPE 400W server PSU with a PCIe adapter.

My DIY fan PWM controller/power board files are stored in my GitHub for anyone who's interested. If I were to make a version 2 of the board, I would probably adjust the +12V and ground planes to make soldering easier. Trying to heat up the whole plane was pretty taxing on my TS100.
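If anyone just wants the control logic without pulling the repo: the usual approach for one of these is to read a thermistor and map it onto a fan curve. Here's a bare-bones Arduino-style sketch of that idea (illustrative only, not the actual board firmware; the pins, the fan curve, and the ADC-to-temperature mapping are all placeholder assumptions):

```cpp
// Bare-bones temperature -> fan-speed sketch (illustrative only, NOT the
// actual board firmware). Assumes an NTC thermistor divider on A0 and a
// 4-pin fan's PWM wire on pin 9.
// Note: analogWrite() on a stock Uno runs at ~490 Hz, not the 25 kHz the
// Intel 4-pin spec calls for; most fans still respond, but a finished
// build would reconfigure a hardware timer for 25 kHz.

const int THERM_PIN = A0;  // thermistor voltage divider input
const int FAN_PIN   = 9;   // fan PWM output (blue wire on a 4-pin fan)

void setup() {
  pinMode(FAN_PIN, OUTPUT);
}

void loop() {
  int raw = analogRead(THERM_PIN);  // 0..1023 from the divider
  // Placeholder ADC-to-degrees mapping; a real sketch would use the
  // thermistor's beta or Steinhart-Hart constants instead.
  long tempC = map(raw, 0, 1023, 0, 100);

  // Fan curve: ~30% duty below 30 C, linear ramp to 100% at 50 C.
  int duty;
  if (tempC <= 30) {
    duty = 77;                           // ~30% of 255
  } else if (tempC >= 50) {
    duty = 255;                          // full blast
  } else {
    duty = map(tempC, 30, 50, 77, 255);  // ramp in between
  }

  analogWrite(FAN_PIN, duty);
  delay(1000);                           // re-check once a second
}
```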
 


audiophonicz

Member
Jan 11, 2021
@omegadraconis

Just out of curiosity, cuz of the shroud comments: what are your drive temps now? Mine are sitting at 39-41°C without the shroud. The 4x 80mm fans are running at around 1050 RPM (about 70% of max).
 

omegadraconis

Member
Oct 23, 2017
@audiophonicz
I'm sitting about the same: running 10x ST3000NM0023s, I see a range of 37 to 41°C. Without the shroud they were running a few degrees higher. My fans are in the ~3000 to 3500 RPM range; they'll move 46 CFM @ 4000 RPM (Delta AFB0812SH-sm26). The shroud is helping, as I can feel the airflow difference, but it's not an amazing difference. If I could get a "proper" seal between the fans and the backplane to allow some pressure buildup, or switch to some 80x80x38 Deltas, I would expect a better result.
 

audiophonicz

Member
Jan 11, 2021
So just to summarize:

Mine: is in a small, enclosed, filled rack... has 4x 17CFM fans running at 1000RPM... no shroud... on a static rotary controller.

Yours: is in a large, open, spaced-out rack... has 5x 46CFM fans running at 3000RPM... semi-shrouded... on a custom PWM controller (I assume it varies by temp).

And we're seeing about the same temps, all within the normal operating range. Begs the question... is the extra effort really worth it?
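Napkin math on the airflow, assuming CFM scales roughly linearly with RPM (per the fan affinity laws):

$$\text{Mine: } 4 \times 17 \times 0.70 \approx 48\ \text{CFM} \qquad \text{Yours: } 5 \times 46 \times \tfrac{3000}{4000} \approx 173\ \text{CFM}$$

That's over 3x the airflow for basically the same temps, so past a certain point the drives just don't seem to care.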
 

pulla

New Member
Jun 12, 2021
Finland
Interesting project; I was thinking of doing something similar and found this one.

My home server (NAS, VMs) is an HP ProLiant DL180 G6 SE326M1 with 14x LFF drives (12x front, 2x rear).

The SAS controller is an LSI SAS9205-8i HBA in IT mode, with a built-in SAS expander in the backplane. There's a single cable between controller and backplane, 12 SAS/SATA disks (sizes between 2 and 8 TB), and a couple of software RAIDs.

Today I connected the HP SAS controller to my old desktop computer (AMD FX), ran the cable from the backplane to the FX machine, and removed the I2C cable between the ProLiant and the backplane. When the ProLiant is powered up, the FX machine detects the drives without any problem. Got decent speeds from the 6-disk RAID10, reading around 450MB/s, which looks good enough. The LEDs are working too.
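That read speed also checks out, assuming ~150MB/s sequential per one of these older disks, since a 6-disk RAID10 streams sequential reads from its 3 stripe members:

$$3 \times 150\ \text{MB/s} = 450\ \text{MB/s}$$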

I was thinking of buying something like an HP DL360p G8 or DL380p G8, or a Dell R620 or R720, and using this one as a DIY DAS for the old disks. I'm really not in the mood to keep running this as a NAS if I buy something newer: it's power hungry, among tons of other reasons, and extra slots for disks are never a bad idea.

I'd connect the backplane to the original PSU, throw the motherboard away, and figure out how to power the machine up. Short-circuiting pins like on a normal ATX PSU might be enough? I haven't found the right pins or tested it yet. I already have an Arduino-powered, temperature-controlled fan controller, so no need to worry about that.

I'm curious how crazy a ProLiant G8 will go with its fans if I hook this DIY DAS up to it...?
 

tulljt

New Member
Jan 13, 2021
I did this exact same thing. You can use a GPU mining power adapter that fits your PSU, and it will normally have a way of powering it on. However, most of these only come in 12V, and I think the rear units require 5V, so you could either buy one that provides 5V or use a step-down converter. Just use the wiring info provided in the first part of this thread. As for fans, I 3D-printed fan holders from Thingiverse and used Evercool EC6025SL12EA fans hooked to a fan controller.
IMG_0584.jpg IMG_0585.jpg
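Sizing the step-down is easy, by the way, assuming ~0.7A on the 5V rail per LFF drive (a typical spec-label figure; check your drives):

$$2 \times 0.7\ \text{A} \times 5\ \text{V} = 7\ \text{W}$$

so even a small 2-3A buck module has plenty of headroom for the two rear drives.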
 

pulla

New Member
Jun 12, 2021
Finland
Oh, I didn't realize those are GPU mining power adapters. Nice. I have one working HSTNS-PL18 (506821-001) 750W PSU; the second one seems to be dead according to the server, but I didn't check it more closely. 5V is needed, but it's no problem getting it.

@tulljt, is that power adapter board the kind that turns on automatically when the power cable is connected? That would be good enough for me, so there'd be no real need for a power button.

Looks good; thanks for the tips, everyone. I will order all the required parts (including a new server) and we'll see how it goes.