I think you forgot to mention a couple of things for your comparison: using 16x 3.84 TB rather than 4x 15.36 TB also comes with additional cost, from cables/adapters up to potentially needing to replace your platform for more lanes.
Saving $15/TB comes back to bite you if you end up spending $18/TB on a new platform and on connecting them.
You can also be looking at 16x 18-25 W vs 4x 16-20 W of power draw.
(I'm usually looking at a build for my lab with a 2-3 year runtime in mind.)
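The trade-off can be sketched as a rough TCO calculation. All numbers here are assumptions pulled from the figures in this thread ($15/TB price gap, ~$18/TB for platform/cabling, the wattage ranges above) plus an assumed electricity price, not measured data:

```python
# Rough TCO sketch: 16x 3.84 TB vs 4x 15.36 TB over a lab-style runtime.
# Prices, platform cost, and wattages are illustrative assumptions only.

HOURS_PER_YEAR = 8760
POWER_COST_PER_KWH = 0.30  # assumed electricity price, USD/kWh

def tco(drive_count, tb_per_drive, price_per_tb, platform_cost,
        watts_per_drive, years):
    """Total cost of ownership: drives + platform/cabling + electricity."""
    capacity_tb = drive_count * tb_per_drive
    drive_cost = capacity_tb * price_per_tb
    energy_kwh = drive_count * watts_per_drive * HOURS_PER_YEAR * years / 1000
    return drive_cost + platform_cost + energy_kwh * POWER_COST_PER_KWH

years = 3
# 16 small drives: $15/TB cheaper, but ~$18/TB extra platform cost, ~21 W each
small = tco(16, 3.84, 25, 61.44 * 18, 21, years)
# 4 big drives: pricier per TB, existing platform, ~18 W each
big = tco(4, 15.36, 40, 0, 18, years)

print(f"16x 3.84 TB over {years}y: ${small:,.0f}")
print(f" 4x 15.36 TB over {years}y: ${big:,.0f}")
```

With these assumed numbers the four big drives come out ahead over three years, mostly on electricity; swap in your own prices and power rates to see where the break-even sits.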
But servers with E1.L bays are the one thing I'd really like to see drop in price on the used market, for sure.
Sure, I can connect them with an adapter to U.2 and just cable from there, but I'd rather slot them into a bay than do some DIY mounting sideways behind a fan row.
(That's what I've been considering in order to stick 2 E1.L drives in each of my whitebox storage nodes.)
a. Risk management: if one 3.84 TB drive out of the 16 breaks, you lose 100 USD; if one 15.36 TB drive out of the 4 breaks, you lose 600 USD.
b. Usable capacity: of course you could argue RAIDZ2 or RAIDZ3 for the 16 drives (or 2x RAIDZ2 of 8) vs a striped mirror or RAIDZ2 of the 15.36 TB drives; you'd spend 200-400 USD on parity for the 3.84 TB drives vs 600 USD for the 15.36 TB ones.
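The parity math in point b can be sketched quickly. Per-drive prices (~$100 per 3.84 TB, ~$600 per 15.36 TB) are the assumed figures from point a, and the single-parity entry for the big drives is my assumption for where the 600 USD figure comes from:

```python
# Dollar value tied up in redundancy for a few pool layouts.
# Drive prices are assumptions from this thread, not quotes.

def parity_cost(price_per_drive, parity_drives_per_vdev, vdevs=1):
    """Cost of the drives that hold redundancy rather than data."""
    return price_per_drive * parity_drives_per_vdev * vdevs

layouts = {
    "16x 3.84 TB, RAIDZ2":         parity_cost(100, 2),            # $200
    "16x 3.84 TB, RAIDZ3":         parity_cost(100, 3),            # $300
    "16x 3.84 TB, 2x RAIDZ2 of 8": parity_cost(100, 2, vdevs=2),   # $400
    "4x 15.36 TB, single parity":  parity_cost(600, 1),            # $600
    "4x 15.36 TB, RAIDZ2":         parity_cost(600, 2),            # $1200
}

for name, cost in layouts.items():
    print(f"{name}: ${cost}")
```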
Agreed on cables and power consumption; using several smaller drives also has its issues.
But it's not like 15.36 TB is the perfect solution everywhere.