I'm familiar with Tim's guide. My experience running things 24x7 at my startup was a bit different from what his blog describes.
I think you'll find really good info at Puget Systems' HPC blog.
As far as CPUs go, there really isn't anything better than the Xeon E5 v4 series in the "used" parts bin, especially the low-core-count parts. They're incredibly reliable, don't run hot, and the NUMA architecture on a PLX'd motherboard allows for as many as four GPU cards with excellent memory/PCIe bandwidth. The Asus X99-E WS series motherboards are really good. Loaded up with a frequency-optimized Xeon and ECC memory, they're really hard to beat. I ran lots of 4x 1080 Ti systems on those.

I also ran the Titan V in 4x setups. I was really ticked when I found that Nvidia clock-crippled them in compute mode; they were actually slower than the 1080 Ti on compute, especially in FP32. FP16 was only somewhat better than running some models in FP32 on the 1080 Ti. The Volta GPU really shines best at FP64, for other HPC use, while most deep-learning models train nicely at FP16. The other big bottleneck on the Titan V is the lack of NVLink. NV did un-cripple the Titan V driver a little after I sold my last set of them.
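Since I mentioned FP16: for anyone who hasn't tried it, mixed-precision training in PyTorch is only a few extra lines. Here's a minimal sketch; the model, optimizer, and fake data are toy placeholders, not anything from my actual setups:

    # Minimal mixed-precision (FP16) training loop in PyTorch.
    # The model and data are toy placeholders; swap in your own.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # GradScaler keeps FP16 gradients from underflowing.
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    for step in range(100):
        x = torch.randn(64, 512, device=device)          # fake batch
        y = torch.randint(0, 10, (64,), device=device)    # fake labels
        optimizer.zero_grad()
        # autocast runs the forward pass in FP16 where it's safe to do so.
        with torch.cuda.amp.autocast(enabled=(device == "cuda")):
            loss = loss_fn(model(x), y)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()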
The Turing cards were a better deal than the Titan V, with the Titan RTX being the best of the bunch. The extra GDDR6 VRAM helps a lot.
The other thing I found is that most power supplies aren't up to the task, no matter how many watts they're rated for. Even some well-respected brands with ample power ratings don't hold up to the "surge" nature of DL training. It isn't like gaming or rendering, where there's a steady load; the DL batches get loaded into card memory and run in half-second to one-second "surges". The EVGA SuperNOVA 1600W handles it well, with second place going to the Seasonic Prime 1300W. I've tried other PSUs with poor results, especially after many hours of work.
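If you want to watch that surge behavior yourself, a rough Python wrapper around nvidia-smi's query mode will show the per-GPU power draw bouncing between idle and the card's limit on every batch. The 0.25-second poll interval here is arbitrary:

    # Quick-and-dirty power logger: watch per-GPU draw spike on every batch.
    import subprocess, time

    while True:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=index,power.draw,power.limit",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(time.strftime("%H:%M:%S"), out.replace("\n", " | "))
        time.sleep(0.25)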
If you look inside the Volta version of the Nvidia DGX Station, it is built on an Asus X99-E WS/10G motherboard, an EVGA SuperNOVA 2 1600W power supply, and a Xeon E5-2699 v4 or 2698 v4 CPU. The Volta cards used in it are proprietary, but very much like the Quadro GV100 (16GB each, or 32GB in later versions) with special firmware and water-cooling blocks (EKWB, I'm pretty sure). The unobtanium part is the NVLink bridge that connects the four cards.
I'm running a few configs now, either 2x Titan RTX or 2x 2080 Ti blower cards. The fan coolers on the Titan RTX limit things a lot, and NVLink being only a 2-card arrangement limits things too. That's OK, because I'm also running Mellanox 25Gb Ethernet with RDMA, and that uses a full PCIe slot.
Even with just 2 cards in the workstation running at full speed, it will make the lights in the office blink. Two systems with 2x GPUs each (which I run on the 25Gb network) will pop the 15-amp breaker for my home office. I usually power cap the GPUs with nvidia-smi to keep things cooler and not pop the breakers.
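The power cap itself is just nvidia-smi's -pl flag. Scripted in Python it looks something like this; the 250W figure is only an example, and the cap has to stay within the range the card's firmware allows:

    # Cap the power limit on each GPU (needs root); 250 W is just an example.
    import subprocess

    CAP_WATTS = 250  # pick whatever keeps the breaker and temps happy

    subprocess.run(["nvidia-smi", "-pm", "1"], check=True)  # persistence mode
    for gpu in (0, 1):
        subprocess.run(["nvidia-smi", "-i", str(gpu), "-pl", str(CAP_WATTS)], check=True)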
It looks like the current generation of AMD has a lot going for it; were I buying new, I'd probably be looking at them. The Xeon Skylake series runs hot and has a funky PCIe root complex that doesn't get along well with peer-to-peer memory access or RDMA. The Skylake HEDT (consumer) parts didn't pan out at all for me: hot running, and nearly all the motherboards are gamer-oriented, with all kinds of overclocking nonsense that makes for unstable systems.
For the OP's question: Just pop the 3080 in and see what it does. I'd guess it will do fine. You can compare benchmarks at the Puget Systems blog.