Optimizing a Starter CUDA Machine Learning / AI / Deep Learning Build

  • Thread starter Patrick Kennedy

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
I log in today and see Xeon Phi and a CUDA machine. Is this a harbinger of things to come? I'm more interested than knowledgeable.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
The 6GB card seems like a reasonable decision price/value-wise. Surprising that you didn't attempt to arrange four 1080s or pre-release Tis, but your restraint is appreciated. ;)

Nvidia has taken carving up the market to the extreme in the last few years (to their benefit), but not necessarily to the benefit of those of us who haven't purchased stock. fp64 has gone in the toilet in exchange for fp32 over the last few years, and we are starting to see fp16 (sounds oddly like an integer to me). While this actually works quite well for DNNs, not all machine learning is neural nets, and for financial analysis those fp64 units are often very handy. The GTX 780 Ti is likely to outperform this latest and greatest consumer lineup in fp64, and the only alternatives are multi-thousand-dollar specialty GPGPUs.
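
If you want to eyeball how lopsided the fp32:fp64 ratio is on whatever card you already have, a rough sketch like the following works (purely illustrative; the fma_loop kernel, file name, and iteration counts are made up for the example, and timing a dependent FMA chain with CUDA events is only a crude proxy for a real benchmark):

[CODE]
// fp_ratio.cu -- illustrative sketch: compare fp32 vs fp64 FMA throughput.
// Build: nvcc -O3 fp_ratio.cu -o fp_ratio
#include <cstdio>
#include <cuda_runtime.h>

// Dependent FMA chain; writing the result out keeps the loop from being
// optimized away.
template <typename T>
__global__ void fma_loop(T* out, int iters) {
    T a = static_cast<T>(threadIdx.x) * static_cast<T>(0.001);
    const T b = static_cast<T>(1.0001);
    const T c = static_cast<T>(0.5);
    for (int i = 0; i < iters; ++i)
        a = a * b + c;                      // one FMA per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

template <typename T>
float time_ms(int iters) {
    const int blocks = 256, threads = 256;
    T* out = nullptr;
    cudaMalloc(&out, blocks * threads * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    fma_loop<T><<<blocks, threads>>>(out, iters);   // warm-up launch
    cudaEventRecord(start);
    fma_loop<T><<<blocks, threads>>>(out, iters);   // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(out);
    return ms;
}

int main() {
    const int iters = 1 << 20;
    float ms32 = time_ms<float>(iters);
    float ms64 = time_ms<double>(iters);
    printf("fp32: %.1f ms  fp64: %.1f ms  fp64 slowdown: %.1fx\n",
           ms32, ms64, ms64 / ms32);
    return 0;
}
[/CODE]

On the recent consumer cards I'd expect the fp64 slowdown to land somewhere around 32x, versus the much smaller gap the original full-rate Titan gave you.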

On one hand the performance of the specialty cards is stunning and the retail cards provide a heck of a game-playing experience, but the "do it all" cards like the original Titans have been abandoned in favor of market segmentation.

Intel is heading one step further with its recent purchase of Altera for their FPGAs, and Google is spec'ing its own silicon, so Nvidia isn't alone here, but I miss the value prop that Fermi brought to the table.

Very exciting times for those who can afford to play.
 

Patrick

Administrator
Staff member
Dec 21, 2010
RobertFontaine said:
The 6GB card seems like a reasonable decision price/value-wise. Surprising that you didn't attempt to arrange four 1080s or pre-release Tis, but your restraint is appreciated. ;)
Yeah, for $230 or so it seemed like a decent option.

The other bit I realized is that it is not used 24x7 at 100%, so I really do believe that a lower spec is fine for learning.

A far cry from the Baidu AI Lab a few racks down from us in the DemoEval lab.
 

RobertFontaine

Active Member
Dec 17, 2015
Winterpeg, Canuckistan
Yes, CUDA is everywhere, but I kind of like the idea of OpenCL... I have Xeon Phis, Xeons, and miscellaneous hardware that I like the idea of integrating, and CUDA is only everywhere that Nvidia is.