Depends on your constraints. Keep in mind that with this you still need to buy/acquire the extension cables and the PCIe adapter boards, and it's probably quite a bit harder to adapt to anything other than a mining-rack-style setup.
A lot of folks will greatly prefer the kind of board that can plug...
a lot of people don't want a 1U server (insanely loud), and also don't want to be stuck on the same platform forever. and that particular server uses a more custom PSU setup; you need the infrastructure to run it. it ends up being more trouble than it's worth.
being able to put the GPUs on...
not anymore. the AOM-SXMV is almost impossible to find now. the other SXM2 boards (like the Dell version) are hardware-locked to their platform, require special drivers to work, and won't work with standard systems.
you can't take the listings on taobao or xianyu at face value...
I wonder what the cause of this is, since it seems to happen at the very end on several different GPUs.
Is this an unspoken decision by Nvidia to cheap out on the last memory module(s), like the whole GTX 970 “3.5GB” RAM issue? (Which resulted in a class action lawsuit against Nvidia)...
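If anyone wants to poke at a card themselves, here's a very rough sketch of the idea (fill VRAM with a known pattern, then read it back, which exercises the memory near the top of the card). It assumes PyTorch with CUDA and is no substitute for a dedicated GPU memory tester:

```python
# Rough sketch: fill the card with a known pattern and read it back.
# Assumes PyTorch with CUDA; a proper GPU memory tester is a better tool,
# this only illustrates the idea of checking the top of VRAM.
import torch

PATTERN = 0x5A5A5A5A            # fits in a signed int32
CHUNK_WORDS = 64 * 1024 * 1024  # 256 MB per chunk

def fill_and_check(device_index=0):
    dev = torch.device(f"cuda:{device_index}")
    chunks = []
    try:
        while True:  # allocate until the card is (nearly) full
            chunks.append(torch.full((CHUNK_WORDS,), PATTERN,
                                     dtype=torch.int32, device=dev))
    except RuntimeError:
        pass  # out of memory -> we've reached the top of VRAM
    torch.cuda.synchronize(dev)
    for i, chunk in enumerate(chunks):
        errors = int((chunk != PATTERN).sum().item())
        if errors:
            print(f"GPU {device_index}, chunk {i}: {errors} bad words")

fill_and_check(0)
```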
FYI, that RAM you have is not compatible with your motherboard or CPU. If you had ever powered it on, you would have realized this.
EPYC Milan only supports RDIMMs (registered/buffered).
you have UDIMMs (unbuffered/unregistered).
don't bother selling them as a combo since they are not compatible.
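For anyone who wants to double-check what they have before buying or selling, here's a quick sketch that parses `dmidecode -t memory` on Linux (needs root; the field names are the usual dmidecode ones):

```python
# Report whether installed DIMMs identify as registered (RDIMM) or
# unbuffered (UDIMM). Needs root; parses `dmidecode -t memory` output.
import subprocess

out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith("Type Detail:"):
        detail = line.split(":", 1)[1].strip()
        if "Registered" in detail:
            print("RDIMM ->", detail)
        elif "Unbuffered" in detail:
            print("UDIMM ->", detail)
```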
what's your primary use-case? what application(s)?
was it that the whole GPU array wasn't recognized without your OCuLink connections, or only that NVLink didn't work?
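In case it helps to check from software, here's a quick sketch that shells out to the standard nvidia-smi subcommands to see what enumerates and whether the NVLink links are up (assumes the NVIDIA driver and nvidia-smi are installed):

```python
# Minimal sketch: confirm the GPUs enumerate and whether NVLink is up,
# using standard nvidia-smi subcommands via subprocess.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

run(["nvidia-smi", "-L"])                  # list every GPU the driver sees
run(["nvidia-smi", "topo", "-m"])          # link matrix: NV# entries = NVLink
run(["nvidia-smi", "nvlink", "--status"])  # per-link NVLink state and speed
```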
me personally, I only use them as individual GPUs, so each of the 4 GPUs runs a separate instance of an application. but best...
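For reference, a minimal sketch of that one-instance-per-GPU setup, using CUDA_VISIBLE_DEVICES so each child process only sees its own card (my_app.py is just a placeholder):

```python
# Launch one instance of an app per GPU by giving each child process its
# own CUDA_VISIBLE_DEVICES. "my_app.py" is a placeholder for the real app.
import os
import subprocess

procs = []
for gpu in range(4):  # the board holds 4 SXM2 GPUs
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    procs.append(subprocess.Popen(["python", "my_app.py"], env=env))

for p in procs:
    p.wait()
```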
looking at the specs, it’s not clear to me that you need the oculink outputs at all unless you’re trying to access the GPUs directly over the network. Are you doing this? Is there something you’re using it for that doesn’t work without the oculink connection? I’m not using them at all. Only the...
Early adopter then lol.
I like the setups. It would have been cooler if we could use the onboard oculink connectors rather than PCIe adapters, but I’m glad some folks finally made them.
very nice. I bought a set of waterblocks with mine as well, but I haven't gotten around to installing them yet. The 3U air coolers keep things cool enough for now.
where'd you get the board from? I've been negotiating with the Chinese sellers and pretty much no one is willing to sell just the...
I have a custom made monoblock. Someone here on STH made a limited run of them. Don’t think they ever made more.
For cooling just the VRMs, you could probably find a suitably sized VRM cooler and figure out a way to custom-mount it.
VRMs way too hot.
reduce the CPU TDP to 225W, or get a lot of additional cooling across the VRM heatsinks.
I watercooled my VRMs for this reason on my H11DSi.
usually 2P systems don't have more exposed PCIe than 1P. AMD uses 64 lanes from each CPU for the CPU0<->CPU1 links. some configurations can reduce the CPU-CPU links, and you get a little more exposed PCIe, but nothing like double.
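If it helps to see the numbers, here's a rough sketch of the lane math for Rome/Milan, assuming 128 SerDes lanes per socket and x16 inter-socket (xGMI) links; exact figures depend on the board:

```python
# Rough lane math for 1P vs 2P EPYC (Rome/Milan), assuming 128 lanes per
# socket and x16-wide inter-socket links (4 links by default on 2P boards).
LANES_PER_SOCKET = 128
LINK_WIDTH = 16

def exposed_lanes(sockets, inter_socket_links):
    used_per_socket = inter_socket_links * LINK_WIDTH if sockets == 2 else 0
    return sockets * (LANES_PER_SOCKET - used_per_socket)

print(exposed_lanes(1, 0))  # 128 lanes on a 1P board
print(exposed_lanes(2, 4))  # 128 lanes with the default 4-link config
print(exposed_lanes(2, 3))  # 160 lanes if the board runs only 3 links
```

So a 3-link board picks up 32 extra lanes, which is the "a little more" mentioned above.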
see here...