I have a pretty large amount of colo in someone else's facility. Hoping to leverage mining to justify stepping up to starting my own in 2018. Mining is a pretty "forgiving client" when it comes to working out any kinks along the way.

He runs a hosting facility in Arizona.
Evap cooling works great in Arizona. A single-tenant facility might be able to get by with evap-only if they're careful about how they run the facility and they use appropriate servers (i.e. no 1u heatsinks).
2U heatsinks easily run 10C cooler under load than 1U heatsinks, while using far less power-hungry 80mm fans instead of the 40mm fans needed in 1U cases. For those doing the math, a 10C difference comes out to 18F. Consider the difference between running a facility at 72F vs 90F while the hardware runs at equal temperatures. That's a huge power savings, and it still leaves around 10C of headroom on the CPU thermal limits at 100% CPU load, at well below 100% fan speed.
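A quick sketch of the conversion arithmetic, in case anyone wants to check it. The one gotcha is that a temperature *difference* converts with the 9/5 factor only; the +32 offset applies to absolute temperatures, not deltas:

```python
def delta_c_to_delta_f(delta_c):
    """Convert a temperature DIFFERENCE from Celsius to Fahrenheit.

    Only the 9/5 scale factor applies to deltas; the +32 offset
    is for absolute temperatures.
    """
    return delta_c * 9 / 5

# The 10C heatsink advantage claimed above:
print(delta_c_to_delta_f(10))  # → 18.0

# Which matches the 72F-vs-90F facility comparison:
print(90 - 72)  # → 18
```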
Even more important than saving power is saving capital cost. If your thermal target is "keep the cold aisle under 90F at all times", you can do without chillers altogether, saving on floor space, installation costs, and maintenance. Ambient air + evaporative cooling will get you below 90F in Arizona pretty much all the time.
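To see why desert air works here, you can run the standard saturation-effectiveness model for a direct evaporative cooler. The 0.85 effectiveness figure and the Phoenix design temperatures below are my assumptions, not numbers from the comment; they're in the typical range for rigid-media coolers and hot-dry summer conditions:

```python
def evap_outlet_temp(dry_bulb_f, wet_bulb_f, effectiveness=0.85):
    """Estimate supply-air temperature from a direct evaporative cooler.

    Saturation-effectiveness model: T_out = T_db - eff * (T_db - T_wb).
    effectiveness ~0.85 is typical for rigid-media coolers (assumption).
    """
    return dry_bulb_f - effectiveness * (dry_bulb_f - wet_bulb_f)

# Assumed rough Phoenix summer design point: 110F dry bulb, 70F wet bulb.
supply = evap_outlet_temp(110, 70)
print(supply)  # → 76.0, comfortably under a 90F cold-aisle target
```

The key is the large dry-bulb/wet-bulb spread in arid climates: even on a 110F day, low humidity leaves plenty of evaporative cooling potential.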
Worst-case scenario, CPUs are designed to throttle, and these CPUs idle at least 20C cooler than they run mining full-out. You'd certainly want a better contingency plan than that, but realistically there's quite a bit of leeway.
You can really only get away with this if you control things end to end. I don't know of any colo facility that would consider a "no 1U servers" policy. You'd also need to put some extra thought into keeping your network gear cool enough, either by selecting hardware that can run with hotter inlet temperatures than usual, or by keeping the switches cooler than the servers.
The payoff is that you can probably achieve a PUE in the ballpark of 1.2 or less if you go through the effort. And, the servers themselves should use 10% less power due to using more efficient 2U / 2-node models. Server fan power doesn't count in PUE calculations but it is an important factor. Overall it shouldn't be hard to get your total costs to under half of what it costs to colo, provided you have sufficient scale to justify the fixed overhead and setup costs.
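PUE is just total facility power divided by IT equipment power, which also makes the caveat above concrete: server fans draw from the IT side of the meter, so they sit in the denominator and don't show up in PUE at all. A toy calculation with made-up numbers:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical facility: 1,000 kW of IT load plus 200 kW of cooling,
# fan-wall, and power-distribution overhead.
print(pue(1200, 1000))  # → 1.2

# Server fans count as IT load, so swapping loud 40mm fans for 80mm ones
# lowers the denominator (and your bill) without improving PUE. A facility
# with hungrier server fans can post the same PUE while burning more power:
print(pue(1260, 1050))  # → 1.2
```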
Facebook does this in New Mexico. The Open Compute form factor is not a coincidence: it provides sufficient height and width for an efficient heatsink design, with two 2-CPU servers in one chassis.