Machine Learning / Neural Network GPUs


Patrick

Administrator
Staff member
Dec 21, 2010
We now have 6x dual socket blades in the STH lab each with 2x GPU support.

I saw a big search engine (not Google) using chassis with 8x "GeForce GTX" GPUs each. I am guessing they are Titan (X/Z) GPUs; judging by the setup, it was not inexpensive or low power.

The Tesla M2090s are dirt cheap (you can best-offer these for $75 each, no problem): Nvidia Tesla M2090 6GB GDDR5 SDRAM PCIe Gen2 x16 Graphics Processing Unit. However, @Patriot let me know that they are not compatible with the latest CUDA.

Now, the other side to this whole thing is that GTC is just around the corner. Maybe I wait until then?

Does anyone happen to have benchmarks that I should start out by scripting? E.g. Linux-Bench-ML? It is time to start working on this stuff.
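For scripting benchmarks, one common starting point is a timed dense matrix multiply, since that is roughly what most ML workloads stress. A minimal CPU/NumPy sketch (a GPU version would swap in CUDA or a framework; the function name and parameters here are just illustrative, not from any existing Linux-Bench module):

```python
import time
import numpy as np

def bench_matmul(n=1024, repeats=5):
    """Time an n x n float32 matmul and report average seconds and GFLOP/s."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up pass so allocation/caching doesn't skew the timing
    t0 = time.perf_counter()
    for _ in range(repeats):
        a @ b
    seconds = (time.perf_counter() - t0) / repeats
    gflops = 2 * n ** 3 / seconds / 1e9  # ~2*n^3 flops per dense matmul
    return seconds, gflops
```

The same harness shape (warm-up, repeat, average) carries over to whatever kernels end up in a Linux-Bench-ML style suite.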

If anyone wants to play with a node that has 2x M2090's, I can probably set that up next week.
 

Patrick
Yea I saw that. Just wondering if you have thoughts on canned benchmarks (unless that is it.)
 

Ramos

Member
Mar 2, 2016
I've done ML for 2 years now, but am way too busy until after May to do anything because of a very tight deadline.

- But after that I could try to implement SIFT/HoG descriptors and some other real-life-useful stuff for ML using GPU programming, if you like?
(Descriptor calculations take a TON of time on huge image data sets, are quite new to GPUs (earliest paper I know of is from 2009), and I have not seen them used in benchmarks yet.)
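To show why descriptor calculation is such a natural GPU benchmark, here is a bare-bones HoG sketch in NumPy: per-cell gradient orientation histograms, with no block normalization or interpolation. Every cell is independent, which is exactly the parallelism a GPU version would exploit. This is an illustrative simplification, not a reference HoG implementation:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Toy HoG: unsigned gradient-orientation histograms per cell x cell patch."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # central differences, x direction
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # central differences, y direction
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation [0, 180)
    h, w = img.shape
    cy, cx = h // cell, w // cell
    bin_width = 180.0 / bins
    desc = np.zeros((cy, cx, bins))
    for i in range(cy):                      # each cell is independent -> GPU-friendly
        for j in range(cx):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            for k in range(bins):
                desc[i, j, k] = m[idx == k].sum()  # magnitude-weighted histogram
    return desc.ravel()
```

On a real image pyramid this inner loop dominates, which is what makes it worth timing on GPUs at all.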

- We could/should toss in a cracker for RAR passwords too. A nice use case was the ~90% crack of LinkedIn's 6.5M leaked user passwords,
25-GPU cluster cracks every standard Windows password in <6 hours

I made one that ran on my GTX 770 for fun, and it cracked all passwords of length 1-8 pretty fast.
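The core of any such cracker is an exhaustive search over the candidate space, hashing each guess and comparing against the target. A CPU sketch of that loop (using MD5 from the standard library purely for illustration; real GPU tools run this search massively in parallel, and RAR uses a much more expensive key-derivation scheme):

```python
import hashlib
import itertools
import string

def crack_hash(target_hex, charset=string.ascii_lowercase, max_len=4):
    """Brute-force all candidates up to max_len chars against an MD5 hex digest."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None  # exhausted the search space without a match
```

The candidate space grows as charset_size**length, which is why moving this exact loop onto thousands of GPU threads pays off so dramatically.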


Puget Systems had a good piece with a lot of GPU stuff that could be used for benchmarking:
Molecular Dynamics Performance on GPU Workstations -- NAMD
(Donald Kinghorn is a legend for me, I love his HPC blog)

Ah, it was "us" who had that article on that GPU server recently:
Supermicro 4028GR-TR 4U 8-Way GPU SuperServer Review
I take it you mean stuff that isn't in the above article?

BIDMach had some Spark benchmarks (I may have stuff in that area around autumn-ish too):
BIDMach: Machine Learning at the Limit with GPUs