Somebody ran a test and found no noticeable difference between PCIe 3.0 x16/x16/x16/x16 and x8/x8/x8/x8, but they were probably measuring frame rates in games. Does that conclusion apply to deep learning as well?
I am going to get one 1080 Ti first and see how it goes. I may upgrade to three more 1080 Tis, or to four better GPUs, later. In that case, how many CPU cores are recommended?
Thanks. What is the recommended number of cores and threads? Somebody also suggested that a CPU with twice as many cores as GPUs would be sufficient!? I can't decide between Intel and AMD.
Still trying to decide which CPU to buy. Some say that deep learning frameworks such as TensorFlow are effectively single-threaded on the Python side, so a good GPU such as a 1080 Ti matters more than a high-end CPU. Others say it is still worth getting a multi-core CPU...
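For what it's worth, the usual argument for multiple cores is not the framework's math (that runs on the GPU) but the input pipeline: reading and preprocessing batches can be spread across workers so the GPU is never starved. A minimal plain-Python sketch of that idea, assuming a hypothetical `load_batch` stand-in for disk reads and decoding:

```python
# Sketch (not TensorFlow-specific): CPU workers prepare batches in parallel
# while the GPU would be busy with the actual training math.
from concurrent.futures import ThreadPoolExecutor

def load_batch(i):
    # hypothetical stand-in for reading/decoding one batch from disk
    return [i] * 4

# 4 workers is an arbitrary choice here; frameworks expose similar knobs
# (e.g. the number of parallel input-pipeline workers).
with ThreadPoolExecutor(max_workers=4) as pool:
    batches = list(pool.map(load_batch, range(8)))

print(len(batches))  # 8 batches ready for the GPU
```

So more cores mainly help when your preprocessing is heavy (image augmentation, decoding), not because the framework itself scales across them.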
Now that Threadrippers have been out for over a month, is the Threadripper 1900 recommended, or an i7, i9, or Xeon W2125/2135 instead? I will probably get one 1080 Ti and perhaps eventually upgrade to four 1080 Tis running at x16/x16/x16/x16. I may get 64-128 GB of RAM initially. Any...
I read that DL uses the memory on the GPU, so a computer with 8-16 GB of system RAM is good enough. However, on Kaggle I hear of people using 64, 128 or even 256 GB of RAM. These seem contradictory. Can anybody please clarify? Is 64 or even 128 GB of RAM recommended? Thanks
I guess one advantage of a Xeon over the Intel i9s and the Threadrippers released last month is that we can use dual-CPU motherboards that support running four GPUs at x16/x16/x16/x16. Am I right?