R720 connector documentation wrong?


ericloewe

Active Member
Apr 24, 2017
344
168
43
32
Icy Dock also makes similar things that will do hot-swap, which is cool (probably more so than actually useful, in practice).
 

MPServers

Member
Feb 4, 2024
41
20
8
Dude, I think you're gonna burn someone's stuff up! I checked the pinout of the R720 and it appears to be as follows for this 8-pin connector and the 4-pin next to it:

View attachment 26640

The white may actually be orange (+3.3V), but don't quote me on that one. The yellows are +12V and the reds +5V, of course.

Did you test the cord you bought? I bet it could really fry some equipment, man.
FYI, don't use a cable like that. I was curious and checked the voltages on my R730 on that mini 8-pin power plug, and it's just 12V and 3.3V. No 5V anywhere. In other words, it's not the same as the OptiPlex 3020 that cable was designed for. It might have the same physical plug, but the voltages are way different.
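To make the mismatch concrete, here's a quick sketch of the kind of check worth doing before plugging in an adapter cable. The pin numbers and readings below are illustrative placeholders, not a verified R730 pinout; substitute whatever your multimeter actually shows.

```python
# Compare the rail an adapter cable assumes on each pin against the voltage
# actually measured on the server header. Pin numbers and values here are
# illustrative, not a verified pinout.
cable_assumes = {1: 12.0, 2: 12.0, 3: 5.0, 4: 3.3}   # what an OptiPlex-style cable expects
measured      = {1: 12.0, 2: 12.0, 3: 12.0, 4: 3.3}  # what the server header might supply

mismatches = {pin: (cable_assumes[pin], measured[pin])
              for pin in cable_assumes
              if abs(cable_assumes[pin] - measured[pin]) > 0.25}

for pin, (expected, actual) in sorted(mismatches.items()):
    print(f"pin {pin}: cable expects {expected} V, header supplies {actual} V -- do not connect")
```

With the example numbers above, pin 3 flags as a 5V line sitting on a 12V rail, which is exactly the kind of thing that fries a drive.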
 

shor0814

New Member
May 27, 2024
29
9
3
The j_bp0 connector is really the power connector for the flexbay available on the 720xd and 730xd chassis, which adds a couple of extra SFF drive bays to the back of the server. The 730 manual calls j_bp0 a backplane connector, and that is where the power for the flexbay comes from. Probably just a typo.

I used the same drive caddy on my 730 that @pilfos used; it makes a great place for a boot drive and I didn't have to snake any extra cables around an already crowded chassis. The only complaint is that the cable would need to be longer to replace a drive without pulling the cover off the server. I'm sure there is an extension cable out there somewhere!
 

MPServers

Member
Feb 4, 2024
41
20
8
I bought a couple of those drive caddies to replace the optical drive. They work great. On the stock optical drive, there's a plastic clip thing screwed onto the back for retention, and the new caddies I got had screw holes in the same place (must be a standard), so I swapped over those retention clips to the caddy and it works great. That means I have to take the lid off anyway if I want to pull the drive, and I'm fine with that. I plan to use that spot as a boot drive and then the hot plug bays for storage. No plans to hot swap my boot disk. :)

Other things I've looked at doing, but haven't quite gotten to yet, are to get some SFF-8643 to SATA breakout cables. I have the PERC controller hooked to the 8 LFF front drive bays in my system, which freed up the two SFF-8643 SATA ports on the motherboard. So with a pair of those breakout cables I could try tucking away another 8 SSDs somewhere inside. All they need is 5V power, which is the tricky part.

The optical drive / tape-backup unit power port has 5V and works fine for a pair of SSDs on the two standalone SATA ports next to it, but I don't think it could provide enough current for more.

However, I'm looking at just adapting a GPU cable and taking 12V from there, feeding a 5V buck converter, and wiring that to a nice little daisy chain of SATA power connectors (maybe a couple of x4 chains, since I doubt I'd find a single x8 cable, and a long chain might see too much voltage drop).

I think just one of those 12V GPU lines can supply 50W, so bucking that down to 5V at roughly 9-10A (after allowing for conversion losses) seems fine, and I think that can run 8 x 2.5" SSDs just fine, figuring maybe 5W max on each, typically less.

I found an eBay seller with cables that go directly from a 6/8-pin GPU connector, through a buck converter, out to 1-4 SATA power plugs. The listing didn't give current/wattage specs, so I'm trying to get that info, but it's reasonably priced. If it can handle 20-25W then I can just get a couple of them.

Another tricky thing is finding a proper right-angle SFF-8643 to SATA breakout. One of the ports on the motherboard *needs* a right angle to avoid hitting the fan shroud (the other is near riser 1 and might interfere with a longer card, but would be fine in my case). I found some on AliExpress that would work. An Amazon seller has some too, but I couldn't find any way to specify the right-angle/right-exit version when ordering, so I'm not sure what's up with that.

Final issue is just where I'd put all the drives. On top of the power supplies, where riser 3 cards would go, there's a lot of space (that's where the optional 2-drive SFF cage goes on an R730XD anyway, and without the cage I bet you could stack quite a few... I should dry-fit to see). That means no cards in riser 3, though. I could also put some in the riser 1 area. I plan to have a GPU and/or 4x NVMe PCIe adapter in riser 2 in the x16 slot, but riser 1 would be mostly empty except for a 10GbE adapter. And I could always replace that with a 2 x 1GbE + 2 x 10GbE SFP+ adapter in the mezzanine slot. Cheap on eBay at the moment, like $14-15 USD.

End result of all this would be 8 LFF up front controlled by the PERC, 2 x SATA on the two standalone SATA ports, and 8 x SATA on the SFF-8643 ports with breakout cabling. That'd be pretty nice... 18 total drives stuffed away. Plus 4 x NVMe in the x16 slot (with bifurcation enabled) and maybe even some 2x NVMe adapters, space permitting, in other slots. :)
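Tallying the layout above, with the per-location counts taken straight from the post:

```python
# Drive count for the proposed single-chassis layout.
sata_drives = {
    "front LFF via PERC":      8,
    "standalone SATA ports":   2,
    "SFF-8643 breakout SSDs":  8,
}
total_sata = sum(sata_drives.values())

nvme_x16 = 4   # 4x NVMe card in the bifurcated x16 slot

print(f"SATA/SAS drives: {total_sata}")
print(f"with NVMe: {total_sata + nvme_x16} drives total")
```

That's the 18 SATA/SAS drives mentioned above, or 22 counting the x16 NVMe card, before any extra 2x NVMe adapters.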

All of this would be more along the lines of "for fun", because currently this particular server has an LSI adapter hooked to a 2U24 external chassis, with 24 x 2.5" drives. I'm just looking at ways to consolidate into a single 2U size "just because I can", and eliminate the dependency on that external unit. If I really wanted to max out my storage, I have some other old Netapp shelves I could throw on there, but at a certain point I have to ask "how much storage is enough" ? LOL
 

shor0814

New Member
May 27, 2024
29
9
3
MPServers said:
I bought a couple of those drive caddies to replace the optical drive. They work great. On the stock optical drive, there's a plastic clip thing screwed onto the back for retention, and the new caddies I got had screw holes in the same place (must be a standard), so I swapped over those retention clips to the caddy and it works great. That means I have to take the lid off anyway if I want to pull the drive, and I'm fine with that. I plan to use that spot as a boot drive and then the hot plug bays for storage. No plans to hot swap my boot disk. :)
I don't plan on hot swapping my boot disk either, but due to the tight confines of the rack location and my wife's stuff (which shall not be disturbed), the removability is helpful in avoiding the wrath of the wife :eek: In fact, I had to replace a drive just the other day (probably not the drive's fault; more likely something else was acting up) and I waited until she was gone to move her stuff.

As far as SSDs go, one thing about the 730: there are a lot of PCIe slots, a lot of PCIe lanes, and great bifurcation support in the BIOS. I currently have 3 PCIe adapters that can handle 8 M.2 NVMe SSDs without the need for power adapters, and still have room for 4 more cards of various capacities. TrueNAS SCALE recognized the drives, so that is a plus. The downside: I haven't tried filling up the cards because I don't have enough NVMe drives to really test performance. Quite the problem to have, I guess.

Now you have me thinking; I might have to open the cover and see where I can stash some drives, as I have a few 8643 splitters. My mezzanine slot holds a 2x40G card. All in, this is a pretty decent storage server for the price, given the clever use of space.
 

MPServers

Member
Feb 4, 2024
41
20
8
I don't remember if STH lets you add links to fleabay or whatever, but in case it does, here's a link to the 12V GPU power to 4 x SATA adapter with the little inline buck converter. I still haven't heard back on the total output power available from it, and it assumes you already have a cable to convert the riser card's GPU power to a standard 6/8-pin GPU connector (not the 12V EPS cables Dell supplies for things like the Teslas).

Alternatively, there are some 12V lines on that 8-pin connector you definitely shouldn't use as described above, so maybe with one of those cables and some repinning of the 5V adapter you could use that. Or repin the adapter straight into the riser's GPU power port, which is conveniently the same EPS 12V form factor, just with a different pinout on the riser side than on the GPU side. So you could get creative and, instead of going from riser to GPU and then GPU to adapter, repin it and cut out a cable in the process.

And you're right, there is a lot of room in these things to stuff bits and pieces here and there if you're creative. I'm in the same boat as you when it comes to NVMe. I know I could fit a lot of them into the PCIe slots. I have the 4x 16-lane adapter, which works great in slot 4 with 4x4x4x4 bifurcation. I assume the adapters that hold 2 NVMe drives and need x4x4 bifurcation would work well in the other slots. Risers 2 and 3 support full-height cards and riser 1 just half-height, but I think the 2-drive NVMe adapters are only half-height anyway. I'm just thinking how nuts (or awesome) it would be to have 6 of the 2-drive adapters plus 1 of the 4-drive adapters, and thus 16 NVMe drives hanging out in the PCIe section. Alas, I only have a handful of spare NVMe drives at the moment, but lots of 3.5" and 2.5" HDDs and SSDs, more than I can shake a stick at.
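For the record, the adapter math above works out like this (card counts from the post; the 7-slot figure is the usual R730 riser complement):

```python
# NVMe capacity if every PCIe slot gets a carrier card.
two_drive_cards  = 6   # x4x4-bifurcated 2-drive adapters
four_drive_cards = 1   # 4x4x4x4-bifurcated 4-drive adapter in the x16 slot

cards_used = two_drive_cards + four_drive_cards
nvme_total = two_drive_cards * 2 + four_drive_cards * 4

print(f"{cards_used} cards, {nvme_total} NVMe drives")
```

Seven cards filling all seven slots for 16 NVMe drives, on top of whatever is hanging off the SATA side.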