I stumbled upon an article on Exxact's blog that covers the advantages of using PLX chips and single-root PCIe complexes for GPU deep learning.
Example 4 highlights their new "Tensor TXR414-1000R" system, which can apparently take up to 20 GPUs while using 5 PLX switches to essentially eliminate the usual P2P bottlenecks (besides the inherent limits of PCIe, obviously).
"It takes the concept of PCIe P2P communication to the limit supporting full P2P communication across up to 20 GPUs. This system can incorporate 10 standard Titan-X or M40 GPUs or, with modified single width high performance heatsinks, 20 single wide Titan-X or M40 GPUs in a single system image."
Other than this odd blog post, I can't actually find any info on this. The system linked in the article takes me to the "TS4-264546-DP2", which, just going off the specs, seems identical to Supermicro's 4028GR-TRT2.
Original blog post: Exploring the Complexities of PCIe Connectivity and Peer-to-Peer Communication
Is this real, or is it an April Fools' joke from 2016?...