Overheating problems with 2x Epyc 7742 in Define 7 XL, 1TB/2TB RAM


gabe-a

New Member
Alright, time to tackle the RAM VRMs, since all components have arrived (sandpaper, the Phobya thermal epoxy ("fear it!"), and the baby copper heatsinks).



Just to be sure I'm doing it right:
1. Sand the flat side of the copper heatsinks with 80-400 grit sandpaper (maybe I'll use 200).
2. Mix 4x the black goop with 1x the yellow goop (how? Where do I mix it? How do I tell if it's the right ratio? What if I accidentally do 1:8 or 1:2?)
3. Apply the goop to the copper heatsink (how? Flip the heatsink over and rub it into the goo? How much goo should I add? I've never done anything with hardware + goo before, so I don't want to fry the motherboard, as I have no idea how to install a new one!)
4. Stick the heatsink onto the VRMs (I think I can do this! Just grab it by the spiky part, stick it down hard onto the VRM, and pray no goop goes squirting onto the motherboard.)

That about sum it up?

For the CPU VRM, any way we can go with a small fan solution like you mentioned before? Or should we take things one at a time?

Thanks for your patience!
 

alex_stief

Well-Known Member
May 31, 2016
884
312
63
38
I don't think a third top exhaust fan would be beneficial. It will just take the air from the topmost front intake fan and exhaust it right away, before it can pass over any components that need cooling.

For a similar reason, I think you should only test with the side panels on from this point on. Without them, some of the air from the intake fans exits the case before it reaches any of the hot components, and the exhaust fans draw some of their air from the surroundings, which is rather pointless. With the side panels mounted, all the air the fans are pushing has to go over the hot components.

On the topic of CPU VRM:
Your cooler setup is different from that of most people here using Noctua CPU coolers. The airflow with SP3 Noctua coolers goes bottom to top; your coolers are blowing front to back. With Noctua coolers, the CPU VRM sits below the CPU heatsinks, in a spot with very little airflow, which is why mounting some small fans on it can help in that case.
With your coolers, the warm exhaust air from the first CPU cooler is blown over the CPU VRM heatsink. Normally, this should still provide enough cooling for the VRM.
So the goal with your cooler setup would be to mix in as much cool air as possible. You could start by mounting one of the spare 2000rpm Noctua fans on top of the CPU coolers, parallel to the board, blowing towards the CPU VRM. You can use the screw holes on the CPU fans as mounting points for this; remove the metal fan guards if they are in the way. Again, I recommend zip ties (or whatever the correct English term is). Don't worry, the CPU sockets and coolers are very sturdy; it is almost impossible to break anything if you apply some common sense.

Memory VRMs, I will try to be as specific as I can:
1) Lightly sand the contact surface of the copper heatsinks.
2) Clean both the sanded surface and the MOSFETs on the board with some isopropyl alcohol, e.g. with cotton swabs or a kitchen towel wrapped around your finger.
3) Lay the PC flat on its side. For the first few hours, the thermal epoxy is not strong enough to hold the heatsinks in place.
4) Throw away the first squirt from the thermal epoxy tubes. To some extent, the paste will have separated while sitting on a shelf for months or years.
5) How to mix it correctly: without a very precise scale that can measure milligrams, your next best option is by length. Onto some hard plastic from some packaging, squeeze out e.g. 1cm of the yellow stuff and 4cm of the black stuff. I mean the length of the bead that comes out of the syringes, not 1cm and 4cm read off the syringe markings: that would be far more than you need, and you probably want to save some for a second or third attempt. If you are really far off from a 4:1 ratio, the epoxy will not cure properly, either remaining semi-soft or becoming too brittle with very low adhesion. You can check how it's doing with the rest of the mixture that you didn't use: it should be slightly sticky to the touch, just soft enough that you can dent it with your fingernails, but not gooey or runny. Feel free to do a test run beforehand without applying it (see the rough ratio check after this list).
6) Mix it thoroughly with the included spatulas. And I mean really thoroughly, for at least 2 minutes.
7) Use the spatulas to apply some of it to the contact surface of the heatsinks. You don't need much, just enough to cover the surface, not more than half a millimeter thick. Don't worry too much about the "correct" amount; this stuff is not electrically conductive.
8) Gently press the heatsinks onto the MOSFETs. For the orientation of the fins I would recommend vertical; this helps natural convection. As you can see in the image I posted a while back, I had to break off one or two fins to make enough clearance for one of the 8-pin power connectors.
9) Wait for at least 8 hours, maybe even 24 hours. Check adhesion by the state of the leftover mixture, and by GENTLY trying to wiggle the heatsinks. If they don't fall off, you are good to go. If a heatsink does come off (happened to me with 2 of them, because I did not sand them for my first attempt), you can just sand off the remaining epoxy from the heatsink and try again. Don't worry about the epoxy that's left on the board, and do not try to get that off. Just glue on top of it.
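Since the mixing ratio question came up: here is the rough ratio check mentioned in step 5, just the arithmetic behind measuring by bead length. It is purely illustrative and rests on two assumptions of mine, not on the epoxy's datasheet: both syringes have the same nozzle diameter (so equal bead length means equal volume), and the 4:1 target applies to the beads as they come out.

Code:
# Sanity check for mixing two-component thermal epoxy by bead length.
# Assumptions (mine, not from the epoxy datasheet): both syringes have the
# same nozzle diameter, so bead length is proportional to volume, and the
# 4:1 (black:yellow) target ratio applies to the extruded beads.

TARGET_RATIO = 4.0  # black : yellow

def mixed_ratio(black_cm: float, yellow_cm: float) -> float:
    """Ratio actually mixed, given the measured bead lengths in cm."""
    return black_cm / yellow_cm

def ratio_error(black_cm: float, yellow_cm: float) -> float:
    """Relative deviation from the 4:1 target (0.25 means 25% off)."""
    return abs(mixed_ratio(black_cm, yellow_cm) - TARGET_RATIO) / TARGET_RATIO

if __name__ == "__main__":
    # Example: aiming for 4 cm + 1 cm, but the beads came out 3.5 cm and 1.2 cm.
    black, yellow = 3.5, 1.2
    print(f"mixed ratio ~ {mixed_ratio(black, yellow):.2f}:1, "
          f"{ratio_error(black, yellow):.0%} off the 4:1 target")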
 

gabe-a

New Member
Thanks @alex_stief for the great instructions. This helped a lot. I've installed them as follows, verifying that the cured remnants of the compound were indeed hard but could be indented with my fingernail.



I've installed all of them in the vertical orientation as suggested. I didn't need to break off any fins, but I made sure to place the two heatsinks whose end fins were most bent inward next to where the power cables connect, so that the connectors could still squeeze in.

Lightly touching them after ~12 hours showed they did not move around anymore on the motherboard.

Should I proceed with just adding the final fan (maybe attached with packing tape at first?) on top of the 2 CPU coolers as-is, or should I also add back the 4 Corsair RAM fans? In an ideal world, the top-mounted fan would cool everything from the top down and not require the additional RAM fans, but I could be mistaken, since the airflow might not reach every RAM stick.

Cheerio,
Gabe
 

alex_stief

Well-Known Member
Well, in an ideal world, case airflow should be enough to keep DDR4 memory modules from thermal throttling. I have never encountered anything like this. Then again, I usually don't get my hands on workstations with more than 512GB of RAM.
I'd say give it a go with a fast 140mm fan alone, it should be a quick test. Only closed side panels from now on ;)
 

gabe-a

New Member
Thanks! I ended up leaving the RAM fans in... maybe I will take them out. But right now I have the 2000rpm fan duct-taped to the CPU coolers, blowing down. I don't think the duct tape will hold long (I've ordered zip ties from Amazon), but for testing I figured why not.



Code:
CPU1 Temp                | 68.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 90.000    | 95.000    | 95.000
CPU2 Temp                | 76.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 90.000    | 95.000    | 95.000
System Temp              | 45.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 80.000    | 85.000    | 90.000
Peripheral Temp          | 38.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 80.000    | 85.000    | 90.000
MB_10G Temp              | 53.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMCpu1 Temp             | 91.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMCpu2 Temp             | 92.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMSoc1 Temp             | 59.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMSoc2 Temp             | 70.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMP1ABCD Temp           | 60.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMP1EFGH Temp           | 96.000     | degrees C  | nc    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMP2ABCD Temp           | 82.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
VRMP2EFGH Temp           | 72.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000
This is after 10 minutes of a run with the case closed. We've managed to improve from 60 seconds, to 5 minutes, to now 10 minutes and counting (although it sounds like a jet engine, and the fan taped on top of the CPU coolers makes a funny humming noise that reverberates throughout the house, haha).
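Side note: rather than eyeballing the IPMI page during a run, I could log just the VRM sensors with a small script. A minimal sketch, assuming ipmitool is installed and run locally in-band and that the sensor names match the output above (for a remote BMC you would add the usual -I lanplus / -H / -U / -P options):

Code:
#!/usr/bin/env python3
"""Poll `ipmitool sensor` and log a few VRM temperatures during a run.

Minimal sketch: assumes ipmitool works locally (in-band) and that the
sensor names match the output above (VRMCpu1 Temp, VRMP1EFGH Temp, ...).
"""
import subprocess
import time
from datetime import datetime

WATCHED = ["VRMCpu1 Temp", "VRMCpu2 Temp", "VRMP1EFGH Temp", "VRMP2ABCD Temp"]
POLL_SECONDS = 30

def read_sensors() -> dict:
    """Return {sensor name: (value, status)} parsed from `ipmitool sensor` output."""
    out = subprocess.run(["ipmitool", "sensor"], capture_output=True,
                         text=True, check=True).stdout
    readings = {}
    for line in out.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 4 and fields[0] in WATCHED:
            readings[fields[0]] = (fields[1], fields[3])  # value, status (ok/nc/cr)
    return readings

if __name__ == "__main__":
    while True:
        stamp = datetime.now().strftime("%H:%M:%S")
        for name, (value, status) in read_sensors().items():
            marker = "" if status == "ok" else "  <-- watch this one"
            print(f"{stamp}  {name:<16} {value:>8} C  [{status}]{marker}")
        time.sleep(POLL_SECONDS)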

I will also try with the RAM fans off -- maybe they're doing more harm than good?

By the way, it's almost always the P1EFGH RAM VRM that acts up. That's the one right next to the incoming power cables, correct? Could the additional heat from the power intake there be making it tough to cool that VRM?

[edit] None of the parts hit critical temp, even though VRMP1EFGH hit 97C, which is "near critical" or "nc". This means the job finished its run successfully for the first time (albeit with some throttling, no doubt due to the "nc" temps of VRM P1EFGH), and it didn't sound the alarm! However, the 140mm fan is already slipping, so I had to remove it. I'll figure out something with zip ties and whatnot, but perhaps removing the RAM fan at the top left (over the EFGH modules) would let the VRM there breathe a bit better, as airflow to the left of it seems somewhat blocked by that neighboring RAM fan.
 

alex_stief

Well-Known Member
Taking into consideration all the limiting factors for troubleshooting we have here, I have reached the point where I cannot help you any further without having the workstation in front of me.

Maybe it is worth taking a step back and looking at your method of testing stability.
I am still in the camp that a system should be able to handle any load you could possibly throw at it, especially if it was sold by a company at a significant markup. But for the sake of your sanity, and of actually being able to use this workstation for something productive: ask yourself whether the workload you use for testing is representative of your actual workload. Maybe your stability testing is far more demanding than any real-world workload you would ever throw at it, in which case trying to get the machine past your current stability tests would be unnecessary.
 

gabe-a

New Member
Thanks alex,

Great points, but I'm not actually stability testing at all. I'm running real HPC workloads that I want to get through; this is where the alarms went off. Multiple different workloads trigger it -- one is parallel genome short-read assembly using ultradeep metagenomic sequencing of my stool sample, and another is a long-read assembly pipeline. It also overheats during certain high-throughput clustering operations and other real-world number crunching like machine learning inference.

I got this machine for scientific workloads. I imagine consumer stress test apps (linpack etc) are just running some variant of the same underlying AVX instructions used in hardcore HPC.

Thanks once again for all the helpful info you've provided thus far. I'll try a bunch of things and see if I can still make this thing work. It is getting closer, to be fair. It could very well be that the motherboard is just not meant for real computation, or that it's not built for gen 2 Epyc with up to 240W of power demand per CPU and 1 or 2TB of 2933MHz RAM (the RAM VRMs are now the first to overheat, and the CPU VRMs actually seem just barely okay now thanks to your suggestions).

By the way, what PSU would you recommend for my system? It may have been built with a poor/insufficient PSU, which I've recently read can cause VRMs to overcompensate when current is drawn to the limit. I've also read that the alarm I've heard from this motherboard is actually PSU-related as well (although I don't think it's a coincidence that it went off at the same time as a VRM overheat).
 

alex_stief

Well-Known Member
Someone else already commented on that here: the board is able to handle this kind of hardware. Problems start when people like us try to use it in workstations, due to lack of alternatives. With server-grade airflow over the board (i.e. a wall of 80mm or 40mm screamers in a 2U or 1U form factor respectively), overheating would not be an issue.
The recently released Gigabyte MZ72-HB0 appears to have overall beefier power delivery and cooling compared to Supermicro's H11DSi.

For the type of hardware you have there, I would probably use a quality power supply of no less than 1200W. be quiet! just released their new flagship model, the Dark Power Pro 12; for me personally, that's a bit on the expensive side. I would rather go with a Seasonic Prime Platinum 1300W, or a Dark Power Pro 11. Of course, go for higher wattage if you plan on adding more GPUs in the future.
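To put a rough number on that recommendation, here is a back-of-the-envelope budget. Every per-component wattage below is my own assumption for illustration, not a measured value from this particular build:

Code:
# Back-of-the-envelope PSU sizing for a dual Epyc 7742 workstation.
# All per-component wattages are rough assumptions for illustration,
# not measured values for this particular build.

budget = {
    "2x Epyc 7742 (cTDP up to 240W each)":   2 * 240,
    "16x 64GB RDIMMs (~4W each under load)": 16 * 4,
    "GPU (mid-range workstation card)":      230,
    "NVMe drives, HBA, 10G NIC, fans":       100,
    "8x HDD (~8W each active)":              8 * 8,
}

load_w = sum(budget.values())
headroom = 1.3  # keep the PSU well below 100% load for efficiency and transients

for part, watts in budget.items():
    print(f"{part:<40} {watts:>5} W")
print(f"{'Estimated sustained load':<40} {load_w:>5} W")
print(f"{'Suggested PSU rating (~1.3x headroom)':<40} {load_w * headroom:>5.0f} W")

The exact numbers will differ for your parts list, but this lands in the same ballpark as the 1200W figure above, and it climbs quickly if you add GPUs.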
Which power supply are you using now?
 

T_Minus

Build. Break. Fix. Repeat
Did you try without the fans on the motherboard? Without the fan on top of the CPU HSF?
You want air in and air out; you don't want it swirling around like that.


Personally, I don't think you're going to get it cool enough with 3x 120mm desktop-style fans. Even the most basic (enterprise/real) rack-mount servers move more air with smaller fans than these: more pressure, more movement over where it's needed.

We can go around in circles on fan configurations too, but if you're running this in a 75F room that's a bit different from a 65F room, etc...


There's no way you're going to run that quietly at the level you're trying to utilize it at, so going to a 4U server chassis seems like the best choice to me.
 

gb00s

Well-Known Member
I'm sorry guys, but in my humble opinion:

1. We do not know your ambient temps ... These have a huge effect on thermals if the delta between the cooling air and the heat sources reaches the 'point of no return'. If the delta is too small, you will never be able to cool it down properly again, other than by stopping the workload or shutting it off.
2. The case with this load of peripherals inside is 'useless' in terms of providing airflow ... Simply put, it's just too small to allow any flow.
3. Did you consider the installed peripherals giving up heat into the case? I mean, there are tons of added NVMe drives, an HBA, a graphics card, etc. ... all producing lots of thermal energy.
:
:
7. Not to mention the direction of the airflow. Front to back & top is, in my opinion, the wrong way. The front fans are cooling the HDDs, nothing else. The air around the HDD cages is already heated up. The first CPU cooler takes its air directly from the heated HDDs. The second CPU cooler sucks air from the super-hot controller between both CPUs.

There are tons of arguments for why this doesn't work.

Last but not least, as I wrote above: the lower the delta between the temps of the components and the air temp in the case, the worse it gets, and the harder it is to avoid the point of no return. It is like an overheating engine: the lower the airflow speed, the faster the air in the case heats up (see the rough airflow estimate below). You will never escape this circle if:

1. You don't use a bigger case that allows air to flow
2. You don't cool down your ambient temp
3. You place peripherals in the path of the airflow and thereby slow it down
4. You use way too many fans interfering with each other in a counter-productive way
:
:
etc etc

This can go on and on. Start fresh and totally rethink the environment.
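To make the airflow point concrete, here is a rough estimate of how much air has to move through the case to carry a given heat load. The heat figure is an assumed round number for illustration, not a measurement of this build:

Code:
# How much airflow does it take to carry heat out of a case?
# Energy balance for air: Q = rho * V_dot * cp * dT  ->  V_dot = Q / (rho * cp * dT)
# rho and cp are generic properties of room-temperature air; the heat load is an
# assumed round number, not a measurement of this particular build.

RHO_AIR = 1.2         # kg/m^3
CP_AIR = 1005.0       # J/(kg*K)
M3S_TO_CFM = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(heat_w: float, delta_t_k: float) -> float:
    """CFM needed to remove heat_w watts if the air may only warm up by delta_t_k."""
    return heat_w / (RHO_AIR * CP_AIR * delta_t_k) * M3S_TO_CFM

if __name__ == "__main__":
    heat = 900.0  # assumed heat dumped into the case air, in watts
    for dt in (15, 10, 5):
        print(f"allowed air temp rise {dt:>2} K -> {required_airflow_cfm(heat, dt):4.0f} CFM")

The smaller the allowed temperature rise (i.e. the hotter the air in the case already is), the more airflow you need for the same heat load. That is exactly the circle described above.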

Regards

Mike
 

alex_stief

Well-Known Member
The air around the HDD cages is already heated up
Not sure what type of HDD you usually work with, but the HDDs used here do not have any significant impact on the temperature of the air passing over them.
The second CPU cooler sucks air from the super-hot controller between both CPUs
Same here. While the CPU VRM heatsink may be hot, it does not give off a huge amount of heat. And the CPU is not actually overheating anyway.
 

gb00s

Well-Known Member
I think you are underestimating the sum of issues ... 2C too high here, 5C more there, and in sum you have +10C higher temps in the case, which impacts the curve exponentially.

Not sure what type of HDD you usually work with, but the HDDs used here do not have any significant impact on the temperature of the air passing over them.
The cooler on CPU0 almost kisses the HDDs. Maybe the HDDs have internal cooling and the thermal energy is carried away somewhere there. Maybe, maybe not. The thermal energy has to go somewhere ...

While the CPU VRM heatsink may be hot, it does not give off a huge amount of heat
It would be very concerning to me if a heat source didn't give up its energy ... LoL.

Ok, I already regret that I even posted some of my thoughts. I stand by my opinion that the whole environment is wrong for these components, and you can dig in the mud as long as you want. Good luck to the OP.
 

gabe-a

New Member
Thanks for your thoughts, Mike -- very appreciated.

I can hopefully answer some of your questions.
1. We do not know your ambient temps ... These have a huge effect on thermals if the delta between the cooling air and the heat sources reaches the 'point of no return'. If the delta is too small, you will never be able to cool it down properly again, other than by stopping the workload or shutting it off.
Ambient temps: the room air is kept at 68F (20C). The air inside the case has a "case temp" that is referenced in some of the IPMI logs; it sometimes approaches 37C, but does not seem to go above that. We don't know where the actual point of measurement is on the motherboard/case, though, just what the logs say, unfortunately.
2. The case with this load of peripherals inside is 'useless' in terms of providing airflow ... Simply put, it's just too small to allow any flow.
I'm not sure the case is the only factor here, as I have a Knights Landing system and a dual Xeon Platinum 8180 from HP, both in identical or smaller cases. It's just that the HP case has a bunch of strange cooling inside, and the Knights Landing is watercooled. Neither of those systems overheats on the same workloads. A friend of mine also built into an ATX mid-tower case with a couple of older Xeons and it (apparently) doesn't overheat.
3. Did you consider the installed peripherals giving up heat into the case? I mean, there are tons of added NVMe drives, an HBA, a graphics card, etc. ... all producing lots of thermal energy.
No, I didn't -- and this is a good point. There is a lot of... stuff in here. Unlike the HP, which has a bunch of modular compartments with dedicated fans whooshing across each separately, everything is sort of mixed together in here like in a desktop PC. During my workloads, though, the GPU is never under load while the CPUs are, and the I/O is usually staggered around the computation, so it's not terribly likely the components would all be under load at once. Of course RAM is an exception, as the CPUs and RAM are often under very high load together.
7. Not to mention the direction of the airflow. Front to back & top is, in my opinion, the wrong way. The front fans are cooling the HDDs, nothing else. The air around the HDD cages is already heated up. The first CPU cooler takes its air directly from the heated HDDs. The second CPU cooler sucks air from the super-hot controller between both CPUs.
This is interesting. Putting my hand in the case shows that the air coming past the hard drives is still relatively cool (of course, this is unscientific). But I do know the hard drives power down when not in use (and I don't pull data from the RAID HDDs while doing computation, so they are confirmed to be in an idle, powered-down state while the CPUs are doing their work, which caches in RAM and occasionally checkpoints onto the NVMe PCIe card). Changing the airflow direction has noticeably improved temperatures (a workload that failed in 60 seconds now runs to completion, albeit with some throttling around the 10 minute point while the RAM VRM cools off).

The lower the delta between the temps of the components and the air temp in the case, the worse it gets ... You will never escape this circle if:

1. You don't use a bigger case that allows air to flow
The flux of air out of the exhaust fans is now quite high, where it was low before. A piece of paper dropped at case height is blown ~4 meters now; the same piece of paper previously wasn't blown even 0.5 meters. I don't think case size alone is the greatest factor, considering my functioning HP Z8 and Colfax Knights Landing rigs. But I'm sure you're right that it plays a role. This case is huge, though. :)
2. You don't cool down your ambient temp
This is interesting -- how would I do this? Ambient as in "inside the case" or "outside the case"? Room A/C keeps it at 68F/20C consistently. A thermometer placed in front of the intake fans reads 68F under load.
3. You place peripherals in the path of the airflow and thereby slow it down
This makes sense. I do have this problem, and I don't know how to fix it. I do know air is rushing past, but there are indeed a bunch of HDDs in the way (and even the front of the case itself, which redirects intake air in from the sides of the front panel!). There are also a bunch of RAM fans in the way, and a bunch of other pieces throughout the case.
4. You use way too many fans interfering with each other in a counter-productive way
This is a good point, but I'm not sure how to ensure airflow without more fans.
etc etc
This can go on and on. Start fresh and totally rethink the environment.
I would actually really appreciate it if you could elaborate on the "etc etc", because all of this is helpful. I did not build this computer and have no idea what to rethink. Can you provide some pointers for a rebuild? (I might not be able to do it myself, but it would help the company I bought it from to rebuild it, as this is now the third reincarnation of the computer.)
 

gabe-a

New Member
Did you try without the fans on the motherboard? Without the fan on top of the CPU HSF?
You want air in and air out; you don't want it swirling around like that.
I've tried without the top-mounted fan, but not yet without the extra fans on the motherboard. I'll give it a try.

Personally, I don't think you're going to get it cool enough with 3x 120mm desktop-style fans. Even the most basic (enterprise/real) rack-mount servers move more air with smaller fans than these: more pressure, more movement over where it's needed.
Makes sense. I do have an HP Z8 workstation (factory configured/set up/installed) and it is absolutely incredible (a little loud under the most intense load, nearly silent otherwise), but it uses a pretty strange cooling system with the various components separated into their own metal "rooms" and fans that blow through each. I also have a Knights Landing workstation, but it's watercooled and the RAM sticks have pretty crazy fins. So there's nothing I can glean from either tower that will help my case here.
We can go around in circles on fan configurations too, but if you're running this in a 75F room that's a bit different from a 65F room, etc...
I like the pun. ;) I keep the room at 68F.
There's no way you're going to run that quietly at the level you're trying to utilize it at, so going to a 4U server chassis seems like the best choice to me.
Any experience with the Rosewill RSV-R4000 rackmount case on Amazon? Or should I get a different number of "U" (smaller "U" = thinner, I'm seeing)? Does it require specialized expertise to set up? I think a new case will be the last resort for me -- I just need to get that one final RAM VRM's temps down.
 

Jaket

Active Member
Sorry to see all of the issues you have had with this. We have had a few clients get systems built by a well-known server builder and then have issue after issue. We built them a 30k system with 8x GPUs and had zero issues. The best option, if possible, is a server case built to keep things cool. Then again, that is also helped by the air in a data center. Not sure how the air is at your place.

Ideally, this would have been a watercooled system if it wasn't going in a server case.
Hopefully you can get it running 100%!
 

gabe-a

New Member
do you have the option of using a real server chassis barebones system?
(pic of R282-Z93)
I guess it depends -- this requires some kind of rack, correct? Does it allow fan speed control? The reason I avoided just buying a legit server in the first place is because of the noise.

I have a dual Xeon Platinum 8180 workstation (HP) that is unbelievably silent except under tremendous load, and a 272-thread watercooled Xeon Phi workstation from Colfax that is also nearly silent. That's why I thought I could have some random company build me a dual-Epyc workstation (similar TDP etc). But it just doesn't work properly under the same workloads. Despite roaring so loudly, it underperforms the dual Xeon, even though ample evidence says it should be outperforming it by ~3x.

I'm becoming convinced real companies use some kind of "magic" to make quiet, high-performing workstations out of these high-end chips.
 

Jaket

Active Member
I guess it depends -- this requires some kind of rack, correct? Does it allow fan speed control? The reason I avoided just buying a legit server in the first place is because of the noise.

I have a dual Xeon Platinum 8180 workstation (HP) that is unbelievably silent except under tremendous load, and a 272-thread watercooled Xeon Phi workstation from Colfax that is also nearly silent. That's why I thought I could have some random company build me a dual-Epyc workstation (similar TDP etc). But it just doesn't work properly under the same workloads. Despite roaring so loudly, it underperforms the dual Xeon, even though ample evidence says it should be outperforming it by ~3x.

I'm becoming convinced real companies use some kind of "magic" to make quiet, high-performing workstations out of these high-end chips.
Finding a workstation with this type of setup is a lot rarer than finding a typical server-style system.
That being said, you are 100% correct regarding the noise level of the server itself. They will often have smaller fans which spin up like crazy; some of the larger 4U cases will make less noise by using larger fans which don't need to spin quite as fast. Either way, a server case is going to be louder than you would want in an office or house.

Servers will cool more easily because they suck in cold air from one side of the rack and push the hot air out the other side.

For your system, I believe finding someone who could watercool the setup for you would be great. The other option is looking at the airflow and seeing how you can improve it. Ideally you want one side sucking in the cold air and the other side pushing the hot air out.

One of my friends/staff members built out a really nice watercooled 64-core Ryzen Threadripper with a pair of 2080 Ti cards, and it runs beautifully even under 100% load. However, that is only a single CPU, albeit one that puts out a lot of heat.

You can run a rack-mount system on a desk or floor like anything else; however, they're designed more for a rack, and they are loud.
 

jpmomo

Active Member
This issue seems to revolve around the fact that the AMD Epyc motherboards are mostly server-based boards. They are designed for front-to-back cooling with high static pressure fans (usually with some form of air shroud). The Intel Xeons have been around much longer, and there are several boards designed for workstation use.

In one of my systems I am using a server board with 2 Naples CPUs as a workstation. It is kind of a Frankenstein build with a couple of EKWB Phoenix 360 AIO watercoolers on the CPUs and a watercooled AMD Frontier GPU. I put that in a massive case, a Thermaltake Level 20 XT (it could probably fit 2 of these builds!). I would still recommend a case designed for a server.

I also have an HP DL380 Gen10 with a pair of Intel 8160s and 6 dual-port NICs that keep it at 100% CPU due to some DPDK software. That server has no issues with heat, even under those extreme conditions. It will get loud when the fans go through their boot-up test but "calms down" after it has finished booting. It will not be as quiet as a case designed for a workstation, but it would alleviate your overheating issues. Somewhat of a mix between a workstation case and a server case is the following from InWin:

(attached image of the InWin chassis)