Intel Xeon Phi Coprocessor 31S1P Special Price Promotion of $195*

epicurean

Active Member
Sep 29, 2014
785
80
28
Can anyone share which Supermicro motherboards this will actually work with?
I'm interested in using it for cryptomining.
 

Robert Fontaine

Active Member
Jan 9, 2018
113
28
28
57
It mostly depends on what size chassis you are using; any of the X9/X10 boards will work.
The X9DRG / X10DRG boards are designed for render farms, but they are bigger than the E-ATX form factor, so you need a huge chassis.
I have a monster Lian Li case for my X9DRG, and my X10DRG has been sitting in a box on the shelf for almost a year waiting for DDR4 to stop being so expensive.

You are going to need to ensure you have adequate power and cooling for this kind of rig.
There is a very sexy example of an X10 quad-GPU rendering box on this site that ticks most of the boxes.
 

brodonalds

New Member
Jan 22, 2018
9
4
3
47
Does anyone who has gone the Tesla route happen to know if there is a rail kit for them? I looked around and didn't see any, but I didn't look too hard, tbh.
 

forroden

Active Member
Jan 1, 2017
53
46
28
@brodonalds said: "Does anyone who has gone the Tesla route happen to know if there is a rail kit for them?"
For the S1070, quad-card thing? Yeah, the part number is 320-0351-000. They are pretty much my least favorite rails in the universe.

Those boxes are also hot, loud, and pretty silly. I thermally overloaded a couple of my servers just by adding one S1070 to the room; they aren't even close to overheating with it turned off. I haven't had it back on since, tbh, so it could have just been me being stupid.
 
  • Like
Reactions: brodonalds

Xamayon

New Member
Jan 7, 2016
25
14
3
With the fan modification they are pretty tolerable noise-wise; you can lower the fan speed a great deal and still cool the Phi cards adequately. At that point it's basically as loud as a typical 1U server. They do put out a lot of heat under load (~250 W for each Phi, plus the standard PSU inefficiencies, so what can you expect), and even at idle if you don't tell the Phi cards to go into their low-power idle state (which is not enabled by default and does not happen automatically).
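
To put rough numbers on the load case (assuming four cards and a ballpark 85-90% efficient PSU, neither of which is from the spec sheet): 4 × 250 W = 1,000 W of card load, which works out to roughly 1,100-1,175 W drawn at the wall, effectively all of which ends up as heat in the room.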

I haven't tried rack-mounting them properly, just stacked them on top of other servers. If the NVIDIA rails are bad, you could always get the rail type that has semi-shelf-like protrusions from each rail to hold up the server; they are commonly used for UPSes and such.
 

kiteboarder

Active Member
May 10, 2016
101
48
28
45
@Xamayon said: "They do put out a lot of heat under load (~250 W for each Phi, plus the standard PSU inefficiencies, so what can you expect), and even at idle if you don't tell the Phi cards to go into their low-power idle state (which is not enabled by default and does not happen automatically)."

Can you please explain how to put these cards into their low power state?

Thanks!
 

Xamayon

New Member
Jan 7, 2016
25
14
3
@kiteboarder said: "Can you please explain how to put these cards into their low power state?"
There's a GUI utility built into the Intel MIC software stack which can enable the low-power states, and there are also command-line configuration utilities in there if the GUI is not usable for whatever reason. I don't remember the specifics, but the commands are covered pretty well by the documentation. One key thing: if you have any of the MIC monitoring applications running, or are interacting with one or more cards, those cards will not go into the low-power states. In my experience, they need to be completely idle and untouched. This may have changed somewhat with later software/hardware versions; my experience has been with a pretty old install on CentOS 6.
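
For anyone hunting for the specifics, here is a minimal sketch of the host-side setup as I remember it from MPSS 3.x (the "Intel MIC stuff" above) on CentOS 6; the exact flag names may vary between MPSS versions, so check the MPSS user guide before relying on this:

    # /etc/mpss/mic0.conf -- one config file per card (mic0, mic1, ...)
    # Enable the card's idle power states; with all four flags on, an
    # untouched card can drop into its deepest package idle states.
    PowerManagement "cpufreq_on;corec6_on;pc3_on;pc6_on"

After editing, restart the MPSS service so the cards come back up with the new settings (service mpss restart on CentOS 6), and keep the caveat above in mind: close micsmc and any other monitoring tools, or the cards will never actually drop into their idle states.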
 

Stefan2k4

New Member
May 17, 2018
7
3
3
51
I know this thread's a little old, but for anyone reading this who is interested in the idea of using the Tesla compute server chassis as a PCIe expansion unit, here's a tip: look for the old S870 versions first. Most of those have probably wound up in the landfill by now, since they shipped with the really old C870 cards and have long been obsolete. However, they don't seem to pay any attention to what kind of cards are installed in them, and they will stay powered on even if they don't detect at least one Tesla or Quadro card. That means you don't have to leave one Tesla card in them as you do with the later units. They are also a bit longer, because the C870 cards and heatsinks were longer than the later Tesla cards. Otherwise they seem to be about the same.
 
  • Like
Reactions: anoother