So, some thoughts, mainly around my experience with bifurcation solutions...
One of the main objectives for me is the 'one box to rule them all' principle. Leaving aside concerns about a single point of failure, I have been building servers that run multiple VMs: some headless for home automation, but several driving workstations on remote screens throughout the house via HDBaseT extenders.
The main challenge with this is that, needing a GPU and a USB controller (at least) per VM, you run out of motherboard and case slots pretty quickly!
The solution here is bifurcation, but chassis space is still the challenge.
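For anyone who hasn't done GPU/USB passthrough before: each VM needs whole PCIe devices isolated from the host at boot via VFIO, which is why the slot count matters so much. A minimal sketch of the host-side config on Linux (the vendor:device IDs below are placeholders — find your actual GPU and USB controller IDs with `lspci -nn`, and use `amd_iommu=on` on AMD platforms):

```
# /etc/default/grub — enable the IOMMU so devices can be handed to VMs
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — claim the passthrough GPU and USB controller
# with vfio-pci before the host drivers grab them
# (10de:1b81 and 1b21:2142 are example IDs only — substitute your own)
options vfio-pci ids=10de:1b81,1b21:2142
softdep nvidia pre: vfio-pci
```

One thing worth checking before buying a bifurcation riser: the devices you split out need to land in their own IOMMU groups, or you'll end up passing through more than you intended.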
I know OP has seen it, but for anyone else, here's a blog post about the kind of setup I like to build
For me, with up to a dozen expansion cards to shoehorn in, some double width, space is the key concern.
A single chassis design with over-specced PCIe mounting points is the holy grail. To achieve this, moving the PSU forward in the case is a great solution: it allows the motherboard to be shifted to one side and all of the remaining back panel to be dedicated to slots, slots, slots. Again, I know OP has seen my impressions of the X-Case X465E, but for me that was almost the ideal solution.
However, even that didn't provide enough space, and even though it's very long, I found it cramped to work in.
My next step was to move to a 2-box solution: the X-Case as the main server, with an additional tethered box to provide storage capacity and PCIe overflow. I modded a Silverstone Grandia GD07 for this (love that case, and its removable, flexible drive cage). I removed the back panel and added a full-length PCI bracket arrangement. More than enough PCIe mounting points there! (All this is missing is a blanking plate on 2 or 3 of those slots to house an IEC socket for power.)
And I think such a 2-box solution has merit. I loved the design of the old Coolermaster HAF Stacker, where an optional add-on chassis was available for huge flexibility. ThermalTake also did this with the sadly discontinued Core X9 (I have one; cavernous, but I've never been able to find a second), and they still do the P200 pedestal, which is an add-on for the W200.
What I'm getting at here is, I'd love to see a chassis system that's modular and flexible. With maybe something as simple as replaceable back panels, a single (stackable?) chassis could start out as a basic server chassis with support for up to eATX, some storage, flexible PSU positioning and as many PCIe mounting points as possible. Then a second chassis could be added to provide more PCIe mounting points, additional storage, and a second PSU (to power that chassis itself, or for overall redundancy).
Key to this design would be cable routing between chassis. All of the DIY solutions I've built to date tend to have a mess of cables running out the back of each unit. With some (capped) slots in the top and bottom of the chassis, they could be stacked and cables run internally. Indeed, one idea would be a completely removable top and floor. Then a 3U base unit could easily be expanded to 6U, or 2U to 4U, or even 4U to 8U.
Anyway, that's probably enough rambling.
Just for reference, I've currently ended up with a Phanteks Enthoo Pro II. It's an amazing case that I'm very, very happy with. It's one of those that supports a secondary mATX system, but it's super flexible, supports 2 PSU positions, and has brilliant storage configuration options. Right now, I've leveraged it to provide 3 separate PCIe mounting locations as highlighted below, and that gives me all the space I need to locate my cards.
One final thought. One of the most frustrating things about cases is obsolescence (ThermalTake, I'm looking at you). Often, proprietary case accessories such as drive mounting cages are impossible to purchase, or the entire case goes EOL. The best approach would be as much standardisation as possible, to allow cost-effective availability and support of as wide a range of accessories as possible.