I've been dabbling with deep learning for about a year, and I've finally hit the point (of frustration) where I'm actually going to invest some time and money into building a deep-learning-specific workstation. This post is my attempt to air out what's been rattling around in my head, so that more knowledgeable people can disabuse me of any crazy notions I may have grabbed hold of.

My goal is to build a system that will help me become competitive in Kaggle competitions. Kaggle isn't my real goal; it's just a convenient stand-in for the level of performance I'd like to hit with this system. My assumption is that a system in the 2-4 GPU range would be about right. As of now, I'm looking at RTX 2070 Super cards as a baseline. My intention is to start with two GPUs and add more as time goes on.

I know I could build this system with either Intel or AMD (Zen 2), but I'm favoring the latter at the moment. I also know that most of the work will be done on the GPU, but I want a CPU that runs fast for the Python code that doesn't get accelerated by a GPU, and for other compilation tasks. This is where I'm hitting my first quandary: should I go with Epyc or Threadripper? Both have enough PCIe lanes for four GPUs. My gut wants to go with an Epyc 7302P or 7402P for lower power usage, but I keep looking at the Threadripper 3960X in the same price range, with a wider assortment of motherboards that could accommodate four GPUs and a lot of other "nice" built-in features.

I'm looking to run Linux only on this machine, and I have no current intention of doing anything other than deep learning with this system. I saw mention of Docker and VMs in other threads, but I'm a bit too much of a luddite to fathom what advantages they would give in this context.

Any thoughts and/or suggestions will be greatly appreciated.