I have received the GPUs, the server to host the GPUs, and the heatsinks.
The heatsinks are adapted to the cards with a special bracket made by a seller on Xianyu.
I am running these GPUs on Inspur's NF5468M5, which supports up to 8 SXM2 GPUs.
I can confirm NVLink does not work; the NVLink traces are simply missing on these GPUs.
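If you want to double-check your own cards, `nvidia-smi nvlink --status` will show it, or you can poll it with the nvidia-ml-py (pynvml) bindings. Below is a rough sketch; the 6 links per GPU is an assumption (typical for SXM2 V100-class parts), not something confirmed for these cards.

```python
# Minimal NVLink state check via pynvml (pip install nvidia-ml-py).
# Assumes up to 6 links per GPU, which is typical for SXM2 V100-class parts.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        states = []
        for link in range(6):
            try:
                active = pynvml.nvmlDeviceGetNvLinkState(handle, link)
                states.append("up" if active else "down")
            except pynvml.NVMLError:
                # The query fails outright when the link hardware is absent/unwired.
                states.append("n/a")
        print(f"GPU {i} ({name}): links {states}")
finally:
    pynvml.nvmlShutdown()
```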
They are a bit too tall for the case, so I am going to have to substitute acrylic or something with cutouts, but the chassis provides enough power and cooling for them. The cards do run hot, and the auto fan tuning does not work well with these GPUs since they do not report their temperature back to the BMC.
The heatsinks are also super scary to install; the screws require a substantial amount of force to mount. I used 5 layers of 0.5 mm pads (I know they don't work well stacked on each other, but that was all I had) to cool the VRMs.
The cards draw upwards of 400W each, so be prepared to rip those fans. In my testing so far, 50% keeps them cool with a 21-22 °C intake. Keep in mind I only have 4 of them currently, so that may change if you run more. The GPUs also fall off the bus when they reach just over 100 °C, so cooling is definitely a point to take note of.
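Since the BMC can't see the GPU temperatures, it's worth polling them from the OS side instead. Here's a rough pynvml sketch of what that looks like; the 90 °C warning threshold and 5-second interval are just illustration values, not anything I've tuned.

```python
# Simple host-side temperature/power poller, since the BMC never sees these GPUs.
# The 90 C warning threshold and 5 s interval are arbitrary illustration values.
import time
import pynvml

WARN_TEMP_C = 90  # the cards drop off the bus just past 100 C, so warn well below that

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()
    while True:
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # reported in mW
            flag = "  <-- HOT" if temp >= WARN_TEMP_C else ""
            print(f"GPU {i}: {temp} C, {power_w:.0f} W{flag}")
        time.sleep(5)
finally:
    pynvml.nvmlShutdown()
```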
I have not had any success power limiting them via nvidia-smi.
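In case anyone wants to poke at this through NVML rather than the command line, a sketch of the equivalent calls is below. It queries the power-limit constraints (which at least shows what range the driver claims to accept) and then attempts to set a lower limit; the 300 W target is just an example, and the set call needs root.

```python
# Query power-limit constraints and try to set a lower limit through NVML.
# The 300 W target is just an example; setting the limit requires root,
# and on these cards the call may simply fail or be ignored.
import pynvml

TARGET_W = 300  # example only

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
        current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)
        print(f"GPU {i}: limit {current_mw/1000:.0f} W "
              f"(allowed {min_mw/1000:.0f}-{max_mw/1000:.0f} W)")
        try:
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, TARGET_W * 1000)
            print(f"GPU {i}: set limit to {TARGET_W} W")
        except pynvml.NVMLError as err:
            print(f"GPU {i}: could not set limit: {err}")
finally:
    pynvml.nvmlShutdown()
```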
My seller also tells me they are not recognized on Dell servers, so I went with this Inspur model, which seems to work alright aside from the GPU temperature reporting.