8 x SSDs in a Dell PowerEdge T140 in place of 4 x 3.5" HDDs... need a Y PCIe 6-pin adapter


sonkam

New Member
Aug 16, 2022
@sonkam
I use 2TB 870 disks in all of my PowerEdges as their boot disk (and homes, software, etc.); they're very reliable and don't overheat.
I recommend against using 870s in RAID5, as it will wear them out faster (they are TLC, so their endurance isn't stellar), but then again it depends on your workloads.
The brackets I used were ICY DOCK EZ-FIT PRO MB082SP (2 x 2.5" to 3.5" drive bay SATA SSD/HDD mounting).
I hope that answers your question.
Regards,
This bracket has two slots; I need a single-slot one, as I will use a total of four disks at most.
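
On the RAID5 wear point above, a rough back-of-the-envelope endurance estimate looks like this. Every number is an illustrative assumption (the ~1200 TBW figure is the published rating for the 870 EVO 2TB, but check the datasheet and your own monitoring for real write rates):

Code:
# Back-of-the-envelope wear estimate for a consumer TLC SSD (e.g. an 870 EVO 2TB)
# used as a RAID5 member. Every number here is an assumption for illustration --
# check the drive's datasheet and your actual write rates.

TBW_RATING_TB  = 1200   # assumed rated endurance of an 870 EVO 2TB, in TB written
HOST_WRITES_TB = 0.5    # assumed workload: ~500 GB of host writes per day to the array
RAID5_DATA_AMP = 2.0    # a small host write lands as data + updated parity, so roughly
                        # 2x the data hits the drives (the classic "4x RAID5 penalty"
                        # also counts the extra reads)
N_DRIVES       = 4      # those writes are spread across the array members

per_drive_tb_day = HOST_WRITES_TB * RAID5_DATA_AMP / N_DRIVES
years_to_tbw     = TBW_RATING_TB / (per_drive_tb_day * 365)
print(f"~{per_drive_tb_day * 1000:.0f} GB/day per drive, "
      f"~{years_to_tbw:.0f} years to reach {TBW_RATING_TB} TBW")

With heavier write loads (or the SSD's own internal write amplification on small random writes) that lifetime shrinks quickly, which is where the "depends on your workloads" caveat comes in.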
 

billbillw

New Member
Feb 5, 2018
I am having this dilemma now as well. I'm trying to shrink my server size and wattage using a T140. I want to use my existing ZFS pool of four 8TB 3.5" drives and have a few small SSDs for boot and apps. Based on my searching, I think soldering or re-pinning that PCIe connector are the only options. Dang, if Dell would just include a single spare 4-pin Molex out of the power supply, this would be easy as pie.
 

ElCoyote_

Active Member
Jul 22, 2016
The T140 is still rocking here. I have since upgraded to an 8GB H740P.
Unfortunately, I've gone with more enterprise SSDs (4 x 7.68TB + 1 x 15.36TB), and those drives are slightly too tall to fit double-stacked in a 3.5" bay, so I had to get creative and used the empty space underneath the DVD-ROM drive to fit a small 2.5" drive cage.
I am not sure whether 8 x HDDs would work in the machine because of the power draw, but as far as SSDs go it's been working great all these years:


Code:
# megaclisas-status
-- Controller information --
-- ID | H/W Model          | RAM    | Temp | BBU    | Firmware
c0    | PERC H740P Adapter | 8192MB | 67C  | Good   | 51.16.0-5150

-- Array information --
-- ID | Type   |    Size |  Strpsz |   Flags | DskCache |   Status |  OS Path | CacheCade |InProgress
c0u0  | RAID-0 |   1818G |   512KB | ADRA,WB |  Enabled |  Optimal | /dev/sda | None      |None
c0u1  | RAID-0 |   6985G |   512KB | ADRA,WB |  Enabled |  Optimal | /dev/sdb | None      |None
c0u2  | RAID-0 |   6985G |   512KB | ADRA,WB |  Enabled |  Optimal | /dev/sdc | None      |None
c0u3  | RAID-0 |   6985G |   512KB | ADRA,WB |  Enabled |  Optimal | /dev/sdd | None      |None
c0u4  | RAID-0 |   6985G |   512KB | ADRA,WB |  Enabled |  Optimal | /dev/sde | None      |None
c0u5  | RAID-0 |  13971G |   512KB | ADRA,WB |  Default |  Optimal | /dev/sdf | None      |None

-- Disk information --
-- ID   | Type | Drive Model                                      | Size     | Status          | Speed    | Temp | Slot ID  | LSI ID
c0u0p0  | SSD  | S620XXXXXX Samsung SSD 870 EVO 2TB SVT02B6Q | 1.818TB  | Online, Spun Up | 6.0Gb/s  | 36C  | [:0]     | 0
c0u1p0  | SSD  | HGST SDLL1HLR076TCCA1Y150XXXXXXXXXXXXXXXXXXX    | 6.985TB  | Online, Spun Up | 12.0Gb/s | 50C  | [:5]     | 5
c0u2p0  | SSD  | HGST SDLL1HLR076TCCA1Y150XXXXXXXXXXXXXXXXXXX    | 6.985TB  | Online, Spun Up | 12.0Gb/s | 42C  | [:1]     | 1
c0u3p0  | SSD  | HGST SDLL1HLR076T5CHSHW09XXXXXXXXXXXXXXXXXXX    | 6.985TB  | Online, Spun Up | 12.0Gb/s | 44C  | [:2]     | 2
c0u4p0  | SSD  | HGST SDLL1HLR076T5CHSHW09XXXXXXXXXXXXXXXXXXX    | 6.985TB  | Online, Spun Up | 12.0Gb/s | 43C  | [:3]     | 3
c0u5p0  | SSD  | WDC WUSTR1515ASS201 B9XXXXXXXXAEA                 | 13.971TB | Online, Spun Up | 12.0Gb/s | 47C  | [:4]     | 4
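
If you want to keep an eye on those temps without eyeballing the whole table, a quick parse of the megaclisas-status output does the trick. A minimal sketch, assuming the pipe-separated "Disk information" layout shown above (column positions may differ between versions of the script):

Code:
#!/usr/bin/env python3
# Minimal sketch: pull per-drive temperatures out of `megaclisas-status` output
# and flag warm drives. Assumes the pipe-separated layout shown above; adjust
# the column indexes if your version prints different fields.
import subprocess

WARN_C = 45  # assumed threshold -- pick whatever suits your drives and airflow

out = subprocess.run(["megaclisas-status"], capture_output=True, text=True).stdout

in_disks = False
for line in out.splitlines():
    if line.startswith("-- Disk information"):
        in_disks = True
        continue
    if in_disks and line.startswith("c"):            # e.g. "c0u1p0  | SSD  | ..."
        cols = [c.strip() for c in line.split("|")]
        dev_id, model, temp_c = cols[0], cols[2], int(cols[6].rstrip("C"))
        flag = "  <-- warm" if temp_c >= WARN_C else ""
        print(f"{dev_id:8s} {temp_c:3d}C  {model}{flag}")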
 

billbillw

New Member
Feb 5, 2018
ElCoyote_ said: (quoting the post above in full)
I agree that 8 HDDs would probably be too much power draw for the single 6-pin PCIe connector. I'm hoping that the four HDDs plus the three SSDs are within the wattage/amperage limit. I know the SAS HDDs really only pull high amperage at spin-up, so that will be the test.

I was also looking at swapping the power supply out for a standard modular ATX unit, but the motherboard on the T140 doesn't use a standard ATX power connector; it has a tiny 8-pin connector and the usual 4-pin CPU connector. I did notice a bundled-up, unused 6-pin connector. I may probe it to see what the voltages are, and maybe I'll splice into that for SSD power if it carries 5V, 12V, and ground.
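
For what it's worth, here's the kind of rough 12V budget math I'm doing in my head. Every per-drive number below is an assumption (pull the real figures from the drive datasheets), so treat it as a sketch, not a measurement:

Code:
# Rough power-budget sanity check for running drives off the T140's single 6-pin
# lead. Per-drive numbers are typical-datasheet assumptions, not measurements,
# and the 75 W figure is the nominal PCIe 6-pin rating -- what Dell's PSU lead
# actually delivers may differ.

CONNECTOR_LIMIT_W = 75    # nominal 6-pin PCIe aux power rating
HDD_SPINUP_12V_A  = 2.0   # assumed 12 V spin-up surge per 3.5" SAS/SATA HDD
HDD_ACTIVE_W      = 8.0   # assumed steady-state draw per HDD
SATA_SSD_W        = 3.0   # assumed active draw per 2.5" SATA SSD

n_hdd, n_ssd = 4, 3

spinup_w = n_hdd * HDD_SPINUP_12V_A * 12 + n_ssd * SATA_SSD_W
steady_w = n_hdd * HDD_ACTIVE_W + n_ssd * SATA_SSD_W

print(f"spin-up peak ~{spinup_w:.0f} W, steady state ~{steady_w:.0f} W "
      f"(connector nominally ~{CONNECTOR_LIMIT_W} W)")

With those assumptions, steady state is comfortable (~41 W); it's the spin-up surge (~105 W) that blows past the nominal rating, which is why the spin-up moment is the real test here.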
 

billbillw

New Member
Feb 5, 2018
I went ahead and did the T-splice and solder this afternoon on the OEM Dell cable (adding three extra SATA power connectors from an old PC power supply). It was a success. Testing with seven old HDDs, it was stable at POST with all drives spinning up without any crashes or warnings, and they all show up in the disk management menu. So happy to have seven drives in this server. I thought about using the BOSS card to add some M.2, but I think this will be fine for now.
(Photo attachment: 20241022_172606.jpg)
 

billbillw

New Member
Feb 5, 2018
"Happy for you that it works! Would you have pictures of your mod?"
No photos. Just a standard T-splice for each of the four wires, soldered and wrapped with heat shrink. I spliced about halfway between the board connector and the first drive connector. I used the technique of spreading the wire insulation, using a pick to open a hole in the wire, threading the new wire through, wrapping it in a spiral, then soldering and heat-shrinking it. It's all inside the server now, working fine.

I ended up only using two SSDs in addition to the four large drives in the built-in bays, because I decided to use a BOSS-S1 for my boot drive(s). The two SSDs are just tucked in loose between the drive cages and the optical drive at the top; the server won't be moved at all.