We now have 6x dual-socket blades in the STH lab, each with 2x GPU support.
I saw a big search engine (not Google) using chassis with 8x "GeForce GTX" GPUs in each chassis. I am guessing they are Titan (X/Z) GPUs, since, judging by the setup, it was neither inexpensive nor low power.
The Tesla M2090s are dirt cheap (you can best offer these for $75 ea. no problem): Nvidia Tesla M2090 6GB GDDR5 SDRAM PCIe Gen2 x16 Graphics Processing Unit. However, @Patriot let me know that they are not compatible with the latest CUDA.
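For context on the CUDA issue: the M2090 is a Fermi-generation card (compute capability 2.0), and Fermi support was dropped after the CUDA 8.0 toolkit. A quick sketch of the cutoffs as I understand them (the `max_cuda_toolkit` helper is mine, not anything from NVIDIA, so verify against the CUDA release notes):

```python
# Hypothetical helper mapping a GPU's compute capability to the last
# CUDA toolkit that still supports it. Only the Fermi and sm_1x
# cutoffs are asserted here -- check NVIDIA's release notes.
def max_cuda_toolkit(compute_capability: float) -> str:
    if compute_capability < 2.0:
        return "6.5"    # original Tesla generation (sm_1x) ended with CUDA 6.5
    if compute_capability < 3.0:
        return "8.0"    # Fermi -- the M2090 is compute capability 2.0
    return "latest"     # Kepler and newer are still current as of this post

# The M2090 tops out at the CUDA 8.0 toolkit:
print(max_cuda_toolkit(2.0))
```

So the cards still work for anything that can target CUDA 8.0 or earlier, but the newest frameworks that require a more recent toolkit are out.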
Now, the other side of this whole thing is that GTC is just around the corner. Maybe I should wait until then?
Does anyone happen to have benchmarks I should start out by scripting? E.g., a Linux-Bench-ML? It is time to start working on this stuff.
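For scripting, I was thinking of something shaped like the existing Linux-Bench runs: wrap each workload in a small timing harness and keep the best of a few repeats. A minimal sketch of that harness shape (the `bench` function and toy workload are just placeholders, not actual Linux-Bench code):

```python
import time

def bench(name, fn, repeats=3):
    """Run fn `repeats` times and keep the best wall-clock time.
    Best-of-N smooths out scheduler noise between runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return name, best

# Toy stand-in for a real ML kernel (a GEMM would go here).
def toy_matmul(n=64):
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

name, secs = bench("toy-matmul", toy_matmul)
print(f"{name}: {secs:.4f}s")
```

The real workloads would obviously be GPU-side (cuDNN/framework kernels), but the runner/reporting layer could stay this simple.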
If anyone wants to play with a node that has 2x M2090s, I can probably set that up next week.