MZ73-LM2 Rev 3, Dual Epyc 9684X, Dual 5090 RTX, relatively small footprint


Venturi

Active Member
Apr 22, 2016
Glad to discuss the BIOS and the hurdles/triumphs of dealing with the MZ73-LM2.

So, I've had success on these boards, but went through a significant learning curve.
While I am in the C-suite at work, I play an active role on the IT side. I do a lot of ML/NLP/LLM and other projects, mostly related to imaging and also to isotopes.

Some of the servers and workstations at work are byproducts of my home PC.

My most recent home PC is based on the MZ73-LM2 Rev 3.0, up from a prior MZ73-LM0, up from a prior MZ72-HB0, up from a prior Asus Sage dual Xeon.

Along the way I've learned all sorts of things about optimizing the BIOS, workarounds, and how to overcome some issues.

So many BIOS learning moments... and I am glad to help other travelers who have hit roadblocks with these boards. I would also like to give special thanks to the members of this forum who professionally and politely filled in things I didn't know and provided details, time, and tutelage. Thank you!


So this home PC/server handles the work I take home with me (imaging, isotopes, ML, NLP, LLM, Chat RTX), but it is also used for email, surfing the web, and playing games.

The home PC:

components:

2x Gigabyte RTX 5090 OC Edition

2x AMD EPYC 9684X, Gigabyte MZ73-LM2 dual-socket motherboard, PCIe 5.0

1.5 TB DDR5 ECC LRDIMM RAM, 2800W Super Flower PSU

4x Micron 9300 Max (RAID, 60TB volume), Samsung 9100 Pro (8TB), 2x Micron 5300 backup drives (16TB)

Asus PA32UCG-K monitor, MS Datacenter 2022 & 2025 & Ubuntu; air cooled, cool temps, small case, quiet operation



IMG_9922.JPG


I still had to upgrade the heatsinks on the board, even though the newer board came with a copper heatsink for the LAN chip (I replaced it with my own design).

I also changed the M.2 NVMe drives from the 8TB Sabrent (Gen 4) to the 8TB Samsung 9100 Pro for my C drive, to benefit from Gen 5.

I also updated the CPU heatsinks with some improvements I made.

Other changes, beyond cable management and the recent inclusion of the 2800W Super Flower PSU, include the move to the Gigabyte 5090 OC cards from the stock NVIDIA Founders Edition 5090s. I was not exceptionally impressed with NVIDIA's cooling: with the fans not spinning, the FE cards crept up to 50°C on idle tasks (no apps, no gaming) before the fans would kick on.

My ongoing issue was that, due to the dual blower passthrough design of the NVIDIA cards, the hot air from the first GPU bathes the second GPU:

IMG_9890.JPG

The Gigabyte 5090s have a more generous cooler and stayed below 39°C at idle with the fans off (normal behavior when not using an app or gaming).

The challenge was the sheer size of the Gigabyte 5090 OC cards, which are huge compared to the NVIDIA FEs.

IMG_9882.JPG

IMG_9884.JPG

I also had to be mindful of the 2800W Super Flower power supply:

IMG_9166.JPG

So I had to come up with a better design. I did not want the hot air from the cards blowing onto the dual CPUs and RAM, and I did not want the video cards blowing hot air down onto the motherboard.

So they had to be mounted upside down, so the warm air is blown away from the other components. (Why is the standard to blow hot air onto the CPUs, RAM, and motherboard?)

So this required more vigorous bracketing:

IMG_9889.jpg

IMG_9885.JPG


The final product runs silent in most cases; even during a heavy gaming session it's not loud.
Under load, the Gigabyte 5090 OC cards peak at 68-70°C, lower than the 5090 Founders Edition. More importantly, when the video card fans are not spinning, the cards stay around 39°C in a room that is normally about 71°F.
yielding:

IMG_9919.JPG

The 2800W power supply is on the other side, along with the cooled (dual quiet 92mm fans) four-drive array.


IMG_9914.jpg

IMG_9168.JPG
IMG_9928.JPG

IMG_9912.JPG

I used other heatsinks for the board and the LAN chipset; the LAN chipset heatsink is my own solid-copper active design. More on that later.
b8e3e98337281ccbecdb7e88d280a9054dc8f3b6.jpeg

3aef2804ddfa5ef80ebcba05ff5a36297d4e6dc5.jpeg

IMG_9926.JPG

So from one volume to the next, read and write performance increased. Here is an example of backing up the C drive volume to the D drive array:


IMG_9978.JPG
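The screenshot carries the actual numbers; as a generic aid for judging a backup like this, the math is just bytes over seconds. A throwaway helper, with purely illustrative figures (not the results from the screenshot):

```python
def backup_minutes(volume_tb: float, write_gb_per_s: float) -> float:
    """Minutes to copy a volume at a sustained write rate.

    Uses decimal units: 1 TB = 1e12 bytes, 1 GB/s = 1e9 bytes/s.
    """
    seconds = volume_tb * 1e12 / (write_gb_per_s * 1e9)
    return seconds / 60

# Illustrative only: an 8 TB volume at a sustained 5 GB/s.
print(round(backup_minutes(8, 5), 1))  # -> 26.7
```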

Obviously, the system is about mGPU, so for a benchmark:


grav.jpg
 

custom90gt

Active Member
Nov 17, 2016
Wow is about all I could come up with. Great build. Not only ultra-high-end components, but lots of love went into this build. Not something you usually see.
 

Venturi

Active Member
Apr 22, 2016
Challenges:
there was a performance hit on the MZ73-LM0 Rev 3 when using any BIOS after R04-F32; reverting to R04-F32 restored performance and stability.

Same with the MZ73-LM2 Rev 3: there was a performance and stability hit after R12-F37; reverting to R12-F37 restored performance and stability.

Performance penalty: -7% ±2%

Suggestion 1: use R12-F37 unless there is a specific feature in the newer BIOSes that you can't live without.

Suggestion 2 (and mileage may vary): use these settings. For discussion: I realize that for NUMA many will favor NPS4, but NPS1 works best for me. AUTO simply means NPS4.
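For readers new to the NPS setting: on EPYC it controls how many NUMA nodes each socket exposes, so the OS-visible node count is simply sockets × NPS (on Linux, `lscpu | grep NUMA` will confirm). A minimal sketch of the mapping:

```python
def numa_node_count(sockets: int, nps: int) -> int:
    """NUMA nodes the OS sees for a given NPS (NUMA-per-socket) BIOS value.

    NPS1 exposes one node per socket; NPS2 and NPS4 subdivide each
    socket into two or four nodes.
    """
    if nps not in (1, 2, 4):
        raise ValueError("EPYC supports NPS1, NPS2, NPS4")
    return sockets * nps

print(numa_node_count(2, 1))  # dual-socket NPS1 -> 2
print(numa_node_count(2, 4))  # dual-socket NPS4 -> 8
```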

I also set up the xGMI links and bandwidth for my use case, permanently committing the lanes, bandwidth, and speed.

Settings of consequence:
Against logic, PCIe slots 1 and 2 go to CPU1, while slots 3 and 4 go to CPU0, so the GPUs are on slots 3 and 4; this solved many headaches.
I also disabled PCIe slots 1 and 2.
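One way to sanity-check that slot-to-socket mapping from the OS: on Linux, every PCI device reports its socket affinity in `/sys/bus/pci/devices/<addr>/numa_node`. As a pure sketch that simply encodes the observation above (slot numbers specific to this board revision):

```python
def slot_to_cpu(slot: int) -> int:
    """PCIe slot -> CPU socket on the MZ73-LM2 Rev 3, as observed:
    slots 1 and 2 hang off CPU1; slots 3 and 4 hang off CPU0."""
    mapping = {1: 1, 2: 1, 3: 0, 4: 0}
    if slot not in mapping:
        raise ValueError(f"no such slot: {slot}")
    return mapping[slot]

# The GPUs sit in slots 3 and 4, i.e. both on CPU0:
print([slot_to_cpu(s) for s in (3, 4)])  # -> [0, 0]
```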

Running 2 GPUs is where I had a learning curve on this board.

Set lanes to 16x, NOT AUTO
IMG_0068.JPG
Enable Resizable BAR:
IMG_0069.JPG

In the SMU:
after much trial and error, it ended up here:
IMG_0070.JPG

the rest:

IMG_0074.JPG

IMG_0071.JPG

IMG_0072.JPG

disable SMEE and SEV control

IMG_0075.JPG

IMG_0076.JPG

IMG_0073.JPG

Mileage may vary; this is for my use case.

PS:
If you know precisely which Gen your video cards are, lock them to that Gen in the BIOS to avoid issues:

IMG_0138.jpg
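Once the Gen is locked, it's worth verifying that the link actually negotiated at that speed. On Linux, `lspci -vv` prints a `LnkSta:` line per device; below is a small parser sketch (line format as seen in typical lspci output; 32 GT/s corresponds to Gen 5):

```python
import re

def parse_lnksta(line: str) -> tuple[float, int]:
    """Extract (speed in GT/s, lane width) from an lspci LnkSta line."""
    m = re.search(r"Speed\s+([\d.]+)\s*GT/s.*Width\s+x(\d+)", line)
    if not m:
        raise ValueError(f"unrecognized LnkSta line: {line!r}")
    return float(m.group(1)), int(m.group(2))

sample = "LnkSta: Speed 32GT/s (ok), Width x16 (ok)"
print(parse_lnksta(sample))  # -> (32.0, 16), i.e. PCIe Gen 5 x16
```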
 

Venturi

Active Member
Apr 22, 2016
Cool temps for the components, yes. But how about the room? I can imagine this kicks out some noticeable heat!
Actually, the room stays around 70-71°F with little effort.

The PC does produce heat under load, but it doesn't seem to affect the room. Then again, that room is about 950 sq ft with a ~22 ft ceiling, so by volume of air the PC does not have a significant impact.
 

Venturi

Active Member
Apr 22, 2016
Great build! What CPU coolers you use ?
I scoured the internet, but through the normal channels (Newegg, eBay, Amazon) I was having a tough time. Then I came across the popular Arctic cooler for SP5, which had the other orientation but was a mediocre performer, with unreliable and noisy built-in fans.

Finally, different choices from China showed up on the internet, eBay, and Alibaba. I ordered 6 pairs of slightly different builds; 2 pairs were the better of the bunch, and I sent the rest back.

On those two pairs I had to fix many flaws, including bending the heat pipes to make the units straight, repasting all the fins, reassembling, pressing, and finally lapping the contact surface to smooth copper.

I went back to buy some spares, but that version is gone.

What made that version unique was that all the fins fully covered the 120x120mm fan area without compromise.

Then, being a fan snob, I ended up with the Phanteks T30s.

I'll look around the net and see if the version I ended up with for SP5 is back.
 

terryww

Member
Aug 19, 2025
I always thought DDR5 RAM needs (passive) cooling in server motherboards. Interesting to see that's not the case.
 

Venturi

Active Member
Apr 22, 2016
I always thought DDR5 RAM needs (passive) cooling in server motherboards. Interesting to see that's not the case.
I guess it depends on the availability of airflow and the surrounding air. As this is a slightly "open" design, hopefully that mitigates it. Under load the RAM temps can get as high as 56-57°C, but only in specific, artificially created scenarios with some heavy-duty CNN/ML.

Most of the time, at idle, they are 39-41°C.
 

DanRR

Member
Feb 4, 2024
Hi Venturi,
A question about the power supply. I just bought it and am totally confused. First, its manual shows strange drawings of cable pin layouts with 3 rows of contacts(!). Second, it has these new four 12+4-pin cables marked "600W", but it is not clear anywhere in the text whether all or most of this PSU's 2800W is supplied via these 12+4-pin cables or shared with the regular 8-pin cables. We do not have these new 12+4-pin connectors on this motherboard. So the question is: if we don't use these new cables, will all the power still be available through the regular 12V cables?

Please clarify. I urgently need this! Thanks
 

Venturi

Active Member
Apr 22, 2016
Hi Venturi,
A question about the power supply. I just bought it and am totally confused. First, its manual shows strange drawings of cable pin layouts with 3 rows of contacts(!). Second, it has these new four 12+4-pin cables marked "600W", but it is not clear anywhere in the text whether all or most of this PSU's 2800W is supplied via these 12+4-pin cables or shared with the regular 8-pin cables. We do not have these new 12+4-pin connectors on this motherboard. So the question is: if we don't use these new cables, will all the power still be available through the regular 12V cables?

Please clarify. I urgently need this! Thanks
So, here to help. Relax.
(hands over the whiskey bottle for a swig)

What do you mean by pictures of "strange drawings of its cable pin layouts with 3 rows of contacts"?

This is the back of the PSU, and I don't see what you are referring to:
IMG_9180.JPG

unless those are the 3 rows you are referring to?

The 600W power leads are just for GPUs. It says 600W each, but I'm sure they can provide more than a 600W ceiling, as the Gigabyte OC 32G 5090s pull 600W to start with, before any overclocking.

The total constant power output is rated at 2800W, but expert reviews on the internet show it can peak at 3300W+.
The 12-pin 600W connectors are part of the shared load. I am not sure if it is configured as a single rail or multiple rail groups.
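For a rough sense of why 2800W is comfortable for this build: using the post's 600W-per-GPU figure and the 9684X's published 400W default TDP, plus an allowance for everything else (the platform number below is my assumption, not from the post):

```python
GPU_W = 600        # per Gigabyte RTX 5090 OC, from the post (before OC)
CPU_TDP_W = 400    # EPYC 9684X default TDP
PLATFORM_W = 300   # assumed: 1.5 TB RAM, NVMe array, fans, motherboard

total_w = 2 * GPU_W + 2 * CPU_TDP_W + PLATFORM_W
headroom_w = 2800 - total_w
print(total_w, headroom_w)  # -> 2300 500
```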


The 12-pin cables are ONLY for GPUs;
the other cables (24-pin, 8-pin, and 6-pin) go to the motherboard.

For example, on the MZ73-LM2 there are:
1x 24-pin
2x 8-pin
3x 6-pin

board.jpg

I hope that helps
(grabs whiskey bottle back)
 

DanRR

Member
Feb 4, 2024
Thanks for responding; just don't drink and drive, and the rest is OK!
So do you think that even though I won't use these 12+4-pin cables and will use only the regular VGA 12V cables, I will still be able to get the claimed 2800W?

The 3-row pinouts are in the manual supplied with the PSU. Isn't that strange? Do you have the same manual? It looks like a counterfeit. Super Flower itself warns on its website: beware of buying from unauthorized sellers.

Screenshot from 2025-10-16 06-56-03.png
 

Venturi

Active Member
Apr 22, 2016
I can't really say.

There is NO "Eco Switch" on my PSUs (I bought 2); however, that could be a generic reference, since it says it's for a 650W PSU.

Does the back of your PSU match this picture:
IMG_9180.JPG

When I get home from work, I'll have to find my boxes and the manual, so I don't yet know what looks counterfeit.
Honestly, I never looked at the manual. Shame on me.


psucolor.jpg

And YES:
you get the full 2800W from the other connectors on the board even if you don't use the GPU connectors.
I believe there are several layers of circuit protection, so no, you would not be able to pull 2800W from just ONE single 8-pin socket ;)

from Super Flower's site on the 2800w:

" Equipped with HCS terminals and a native 12V-2x6 (12VHPWR) connector, it supports up to 200% power excursion, meeting the demands of next-gen GPUs while reducing cable bending stress."

p4.jpg

Full disclosure:
this is the best I can do while in a meeting, replying from my cell phone.
 

DanRR

Member
Feb 4, 2024
Great, many thanks for confirming that I will still get the power, as Turins draw it like there's no tomorrow.

The hope is that the rails supplying these new 12+4-pin outlets are also connected in parallel to the regular 8-pin 12V outlets.

And yes, my PSU looks the same as yours.
No ECO buttons either.

My Corsair HX1500i PSU does not sustain even a 100% excursion; it crashes the computer when the power reaches the nominal rating.
 

DanRR

Member
Feb 4, 2024
Also, it would be nice to know how you connected the cables to the motherboard: which port number on the PSU went to which place on the motherboard. Neither the PSU nor the motherboard manufacturer clarifies these important details about their products. I am still guessing which port supplies the major part of the power to the processors. Regular desktop processors do not need much power; the GPU grabs most of it. But in our case of power-hungry Turins, the processors grab most of the power, so we should probably feed them from the GPU ports of the PSU... But the processor and GPU cables are slightly different... total confusion. Also, on this damn PSU there is no indication of which 8-pin ports are for CPU and which are for GPU.

And that's not all. The warning in the manual says not to use any power cable besides the original. But how can this original cable then be used in the USA with its 120V? Do they read what they write?

Gigabyte itself adds to the confusion. They claim a consumer PSU should not be used with server motherboards, but they do not say exactly which PSUs or cables they recommend.
 

DanRR

Member
Feb 4, 2024
this is a +12V single-rail PSU. A short circuit at one end of an 8-pin just gives fire; it does not trigger OCP.
Single rail is probably good news, isn't it? I did not find this info anywhere. A short circuit has a very low probability.
 

Venturi

Active Member
Apr 22, 2016
Also, it would be nice to know how you connected the cables to the motherboard: which port number on the PSU went to which place on the motherboard. Neither the PSU nor the motherboard manufacturer clarifies these important details. I am still guessing which port supplies the major part of the power to the processors. Regular desktop processors do not need much power; the GPU grabs most of it. But in our case of power-hungry Turins, the processors grab most of the power, so we should probably feed them from the GPU ports of the PSU... But the processor and GPU cables are slightly different... total confusion.

Gigabyte itself adds to the confusion. They claim a consumer PSU should not be used with server motherboards, but they do not say exactly which PSUs or cables they recommend.

I did this:
IMG_9180 (1).JPG

board.jpg
 