I bought a couple of those drive caddies to replace the optical drive. They work great. On the stock optical drive there's a plastic retention clip screwed onto the back, and the new caddies I got had screw holes in the same place (must be a standard), so I swapped those clips over to the caddy and it's a solid fit. That means I have to take the lid off anyway if I want to pull the drive, and I'm fine with that. I plan to use that spot as a boot drive and the hot plug bays for storage. No plans to hot swap my boot disk.
Another thing I've looked at, but haven't done yet, is getting some SFF-8643 to SATA breakout cables. I have the PERC controller hooked to the 8 LFF front drive bays in my system, which freed up the two SFF-8643 SATA ports on the motherboard. So with a pair of those cables I could try tucking another 8 SSDs away somewhere inside. All I need is 5V power for them, which is the tricky part.
The optical drive / tape backup unit power port supplies 5V, and it works fine for a pair of SSDs using those 2 standalone SATA ports next to it, but I don't think it could provide enough current for many more.
However, I'm looking at just adapting a GPU power cable, taking the 12V from there into a 5V buck converter, and wiring that to a nice little daisy chain of SATA power connectors (maybe a couple of x4 chains, since I doubt I'd find a single x8 cable, and a long chain might see noticeable voltage drop).
I think just one of those 12V GPU lines can supply 50W, so bucking that down to 5V at around 9-10A (allowing a fudge factor for losses in the conversion) seems fine, and that much power can run 8 x 2.5" SSDs easily, figuring maybe 5ish W max on each, typically less.
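The budget math above can be sketched out quickly. All the numbers here are assumptions, not from any datasheet: ~50W available from one GPU power lead, ~90% buck converter efficiency, and a pessimistic 5W per drive:

```python
# Rough power-budget sanity check for the buck-converter SSD plan.
# All figures are assumptions for illustration, not measured values.

GPU_LINE_W = 50.0        # assumed budget from one 12V GPU power lead
BUCK_EFFICIENCY = 0.90   # typical ballpark for a 12V -> 5V buck module
SSD_MAX_W = 5.0          # pessimistic worst-case draw per 2.5" SATA SSD

usable_5v_w = GPU_LINE_W * BUCK_EFFICIENCY        # watts available at 5V
usable_5v_a = usable_5v_w / 5.0                   # amps available at 5V
drives_supported = int(usable_5v_w // SSD_MAX_W)  # worst-case drive count

print(f"{usable_5v_w:.0f} W / {usable_5v_a:.1f} A at 5V "
      f"-> {drives_supported} drives at {SSD_MAX_W} W each")
```

Even with the worst-case 5W-per-drive figure, 8 drives fit inside that envelope with a little headroom.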
I found an ebay seller that has cables going directly from a 6/8 pin GPU connector, through a buck converter, out to 1-4 SATA power plugs. The listing didn't give current/wattage specs, so I'm trying to get that info, but they're reasonably priced. If they can handle 20-25W each, I can just get a couple of them and be done.
Another tricky thing is finding a proper right-angle SFF-8643 to SATA breakout. One of the ports on the motherboard *needs* a right-angle connector to clear the fan shroud (the other is near riser 1 and might interfere with a longer card, but would be fine in my case). I found some on AliExpress that would work. There's an Amazon seller that has some too, but I couldn't find any way to specify the right-angle/right-exit version when ordering, so I'm not sure what's up with that.
Final issue is just where I'd put all the drives. On top of the power supplies, where riser 3 cards would go, there's a lot of space (that's where the optional 2-drive SFF cage goes on an R730XD anyway, and without the cage I bet you could stack quite a few... I should dry fit to see). That means no cards in riser 3, though. I could also put some in the riser 1 area. I plan to have a GPU and/or a 4x NVMe PCIe adapter in the riser 2 x16 slot, but riser 1 would be mostly empty except for a 10Gb adapter. And I could always replace that with a 2 x 1GbE + 2 x 10GbE SFP+ adapter in the mezzanine slot. Cheap on ebay at the moment, like $14-15 USD.
End result of all this would be 8 LFF up front controlled by the PERC, 2 x SATA using the regular SATA II ports, and then 8 x SATA using the SFF-8643 ports with breakout cabling. That'd be pretty nice... 18 total drives stuffed away. Plus 4 x NVMe in the x16 slot (with bifurcation enabled), and maybe even some 2x NVMe adapters, space permitting, in other slots.
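Tallying it up (the NVMe counts are the "space permitting" part, so I've kept them separate):

```python
# Drive-count tally for the consolidation plan described above.
front_lff = 8      # LFF bays up front on the PERC
onboard_sata = 2   # the two standalone SATA ports
breakout_sata = 8  # two SFF-8643 ports broken out to 4 SATA each

sata_total = front_lff + onboard_sata + breakout_sata
print(sata_total)  # 18 drives tucked away

nvme_x16 = 4       # bifurcated 4x NVMe adapter in the riser 2 x16 slot
print(sata_total + nvme_x16)  # 22 counting the quad NVMe card
```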
All of this is more along the lines of "for fun", because currently this particular server has an LSI adapter hooked to an external 2U 24-bay chassis with 24 x 2.5" drives. I'm just looking at ways to consolidate into a single 2U box "just because I can" and eliminate the dependency on that external unit. If I really wanted to max out my storage, I have some other old NetApp shelves I could throw on there, but at a certain point I have to ask "how much storage is enough?" LOL