I don't work in a large data center environment anymore, but even back two decades ago, working on Sun/Compaq/SGI/DEC servers, I don't recall fan failures being a big enough issue to even discuss. I'm sure they must have happened across the thousands of servers we had, but there were bigger problems to focus on, so it never came up as an operational issue that I can recall. It was always neat to see the mechanical designs that allowed easy swapping of fans, though.
The data center I worked in two decades ago usually had servers racked with cable management arms where available, so a server could be pulled out of the rack while remaining online. Not every server vendor offered that back then, and it seemed like a really useful feature: having to walk all the way down the aisle to the other side of the row of racks just to unplug everything, and then all the way back to the front, was a pain. But I've seen more and more pictures of servers no longer being racked like that, where even the minimal cabling has to be disconnected before pulling the server out - which rather defeats the point of hot-swap fans.
These days, I've been involved more at the system and software application level. Often we design applications to run across multiple nodes, building most of the high availability and redundancy into the code rather than depending on the hardware to survive faults. In those cases, provided it's done well at the application layer, there's no longer a strong need for high uptime on individual servers: the applications are designed to handle nodes going offline smoothly. I'd imagine that with infrastructure automation and cloud orchestration, per-server uptime is becoming less and less important, so the same questions about the value of hot-swap fans probably apply to other components as well.
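To make the "handle nodes going offline" part concrete, here's a minimal sketch of the kind of client-side failover I mean. The node names, port, and function are all made up for illustration; real systems would typically use service discovery, a load balancer, or retries built into an RPC library instead of a hand-rolled loop like this:

```python
import urllib.request
import urllib.error

# Hypothetical replica endpoints - illustrative names, not a real deployment.
NODES = [
    "http://node1.internal:8080",
    "http://node2.internal:8080",
    "http://node3.internal:8080",
]

def fetch_with_failover(path, timeout=2.0):
    """Try each node in turn; a dead node (failed fan, pulled server,
    whatever) is simply skipped rather than treated as an outage."""
    last_error = None
    for node in NODES:
        try:
            with urllib.request.urlopen(node + path, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            last_error = err  # node unreachable; try the next replica
    raise RuntimeError(f"all nodes failed; last error: {last_error}")
```

Once every caller behaves like that, a single box dropping out for a fan (or anything else) is a non-event, which is exactly why the individual server's uptime stops mattering so much.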