NVIDIA Titan V Extracting Value from Deep Learning Enhancements

  • Thread starter Patrick Kennedy

Patriot

Moderator
Apr 18, 2011
Noticed on Nvidia's cloud signup the other day that they are trying to enforce, in the EULA, that you will only be running Teslas in servers.
Edit: Patrick asked where, and I don't see it in the cloud signup... I swear I read it in their documentation yesterday, but... I read a lot... I'll find it, because I need to for... well, the project that has me working on Teslas.

Sooo, who wants to buy my 2U workstation?
 
Last edited:

Patrick

Administrator
Staff member
Dec 21, 2010
This is why I come to this website.
Ha, you are the third person to quote that to me already! That is not the normal STH tone, but I decided to put it in.
 

AdrianB

Member
Mar 3, 2017
Deep learning might be fashionable, but there are still people like me who do not care about deep learning and instead need fast double-precision (i.e. FP64) computation.

For such people, the introduction of the Titan V is much more valuable than it is for those who need deep learning.

For deep learning there are many reasonable alternatives, but for double precision there has been, for many years, only a single choice better than the Xeons at both DP GFLOPS per dollar and DP GFLOPS per watt: the ancient FirePro cards based on AMD Hawaii, like the ones I am still using.

Many years have passed and no better product has been launched... until now.

The Titan V finally surpasses the old AMD Hawaii. It is more than 3 times faster (double the number of arithmetic units and a clock more than 50% higher) but slightly less than 3 times more expensive, so it has better DP GFLOPS/dollar. And while being more than 3 times faster, its power consumption is only slightly higher, so its DP GFLOPS/watt is almost 3 times better.


The improvement over the Xeons is even larger: the best Platinum Xeons have about the same DP GFLOPS per watt as AMD Hawaii, but much worse DP GFLOPS per dollar.

So the people who commented that the Titan V is expensive don't know what they are talking about, because from now on the Titan V provides the cheapest way of performing double-precision computation, in both initial cost and power consumption.
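The GFLOPS-per-dollar and GFLOPS-per-watt argument above is simple arithmetic; a minimal sketch of it in Python, where the spec figures are my own ballpark assumptions from public datasheets (not numbers given in the post), and the verdict naturally depends on the prices you plug in:

```python
# Back-of-the-envelope FP64 value comparison, following the post's
# GFLOPS/dollar and GFLOPS/watt reasoning.

def dp_value(peak_gflops, price_usd, board_watts):
    """Return (DP GFLOPS per dollar, DP GFLOPS per watt) for a card."""
    return peak_gflops / price_usd, peak_gflops / board_watts

# Assumed figures: Titan V ~7450 FP64 GFLOPS, $2999 list, ~250 W board power;
# an AMD Hawaii FirePro ~2620 FP64 GFLOPS, ~$999 street, ~275 W.
titan_v = dp_value(7450, 2999, 250)
hawaii = dp_value(2620, 999, 275)

print(f"Titan V: {titan_v[0]:.2f} GFLOPS/$, {titan_v[1]:.1f} GFLOPS/W")
print(f"Hawaii : {hawaii[0]:.2f} GFLOPS/$, {hawaii[1]:.1f} GFLOPS/W")
```

With these assumed numbers the GFLOPS/watt advantage of the Titan V is roughly 3x, matching the post; the GFLOPS/dollar side is sensitive to the street price assumed for the older FirePro cards.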


Previously, the Quadro GP100 had good DP GFLOPS per watt, better than the Xeons, but it was ridiculously overpriced, so it made sense only for someone for whom power consumption was critical and the initial cost completely irrelevant.
 

Edu

Member
Aug 8, 2017
It seems like they are trying to control the sale of this card. It's only available from the Nvidia store, max 2 per customer. Nvidia doesn't want customers installing workstation or gaming cards in GPU servers, because it loses money if the customer doesn't buy the more expensive Tesla version.
 

i386

Well-Known Member
Mar 18, 2016
Germany
Noticed on Nvidia's cloud signup the other day that they are trying to enforce, in the EULA, that you will only be running Teslas in servers.
Edit: Patrick asked where, and I don't see it in the cloud signup... I swear I read it in their documentation yesterday, but... I read a lot... I'll find it, because I need to for... well, the project that has me working on Teslas.

Sooo, who wants to buy my 2U workstation?
From what I read the last few days, it's not about servers but about the usage of GeForce cards in datacenters. If you're using it for deep learning at home or in an office, it should be fine.

Source: NVIDIA - Download Treiber (German-language driver download page)