Glad to discuss the BIOS and the hurdles/triumphs of dealing with the MZ73-LM2
So, I've had success on these boards, but went through a significant learning curve.
While I am in the C-suite at work, I play an active role on the IT side, doing a lot of ML / NLP / LLM and other projects, mostly related to imaging and isotopes.
Some of the servers and workstations at work are byproducts of my home PC.
My most recent home PC is based on the MZ73-LM2 Rev 3.0, up from a prior MZ73-LM0, up from a prior MZ72-HB0, up from a prior Asus Sage dual Xeon.
Along the way I've learned all sorts of things about optimizing the BIOS, workarounds, and how to overcome some issues.
So many BIOS learning moments... and I am glad to help other travelers who have hit the roadblocks on these boards. I would also like to give a special thanks to members of this forum who professionally and politely walked me through things I didn't know, and provided details, time, and tutelage. Thank you!
So this home PC/server is used when I take work home with me (imaging, isotopes, ML, NLP, LLM, Chat RTX), but also for email, web surfing, and playing games.
The home PC:
Components:
2x Gigabyte RTX 5090 OC Edition
2x AMD EPYC 9684X, Gigabyte MZ73-LM2 dual-socket motherboard, PCIe 5.0
1.5 TB DDR5 ECC LRDIMM RAM, 2800W Super Flower PSU
4x Micron 9300 Max (RAID, 60TB volume), Samsung 9100 Pro (8TB), 2x Micron 5300 backup drives (16TB)
Asus PA32UCG-K monitor, Windows Server 2022 & 2025 Datacenter & Ubuntu; air cooled, cool temps, small case, quiet operation

I still had to upgrade the heatsinks on the board; the newer board did come with a copper heatsink for the LAN chip, though I replaced it with my own design.
I also changed the M.2 NVMe drives for my C drive from the 8TB Sabrent (Gen 4) to the Samsung 8TB 9100 Pro, to benefit from Gen 5.
I also updated the CPU heatsinks with some improvements I made.
Other changes, beyond cable management and the recent inclusion of the 2800W Super Flower PSU, include the move to the Gigabyte 5090 OCs, up from the stock NVIDIA Founders Edition 5090s. I was not especially impressed with NVIDIA's cooling: during idle tasks (no apps, no gaming), the FEs crept up to 50 C before the fans would kick on.
My ongoing issue is that, due to the dual-blower passthrough design of the NVIDIA cards, the hot air from the first GPU bathes the second GPU:

The Gigabyte 5090s have a more generous cooler and stayed below 39 C at idle with the fans off (normal behavior when not running an app or gaming).
The challenge was the sheer size of the Gigabyte 5090 OCs; they are huge cards compared to the NVIDIA FEs.


I also had to be mindful of the 2800W Super Flower power supply.

So I had to come up with a better design. I did not want the hot air from the cards blowing onto the dual CPUs and RAM, and I did not want the video cards to blow hot air down onto the motherboard.
This required mounting them upside down so the warm air is blown away from the other components. (Why is the standard to blow hot air onto the CPUs, RAM, and motherboard?)
This in turn required more robust bracketing:


The final product runs silent in most cases; even during a heavy gaming session it's not loud.
Under load the Gigabyte 5090 OC cards peak at 68-70 C, lower than the 5090 Founders Edition. More importantly, with the video card fans off, the cards stay around 39 C in a room that is normally about 71 F.
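For anyone who wants to watch those idle/load numbers over time, here is a minimal Python sketch that polls `nvidia-smi` for per-GPU temperature and fan speed (assuming `nvidia-smi` is on the PATH; the helper names here are my own, not from any tool):

```python
import csv
import io
import subprocess

def parse_gpu_stats(csv_text):
    """Parse `nvidia-smi --format=csv,noheader,nounits` output into
    a list of (gpu_index, temperature_C, fan_percent) tuples."""
    stats = []
    for row in csv.reader(io.StringIO(csv_text)):
        idx, temp, fan = (field.strip() for field in row)
        stats.append((int(idx), int(temp), int(fan)))
    return stats

def read_gpu_stats():
    """Ask the NVIDIA driver for per-GPU temperature and fan speed."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,temperature.gpu,fan.speed",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_stats(out)
```

Run `read_gpu_stats()` in a loop with a sleep to log how the cards behave with the fans off versus under load.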
yielding:

The 2800W power supply is on the other side, along with the cooled (dual quiet 92mm fans) 4-drive array.




I used other heatsinks for the board and the LAN chipset; the LAN chipset heatsink is my own solid-copper active design. More on that later.



So from one volume to the next, read and write performance increased. Here is an example of backing up the C drive volume to the D drive array.
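If you want a rough, OS-agnostic sanity check of sequential throughput between volumes (independent of whatever the backup tool reports), a simple timed chunked copy works; the 64 MB chunk size below is just an assumption, not tuned:

```python
import time

def copy_throughput(src_path, dst_path, chunk_bytes=64 * 1024 * 1024):
    """Copy src to dst in large sequential chunks.
    Returns (bytes_copied, throughput_MB_per_s)."""
    start = time.perf_counter()
    copied = 0
    with open(src_path, "rb") as fin, open(dst_path, "wb") as fout:
        while True:
            buf = fin.read(chunk_bytes)
            if not buf:
                break
            fout.write(buf)
            copied += len(buf)
    elapsed = max(time.perf_counter() - start, 1e-9)
    return copied, copied / (1024 * 1024) / elapsed
```

Point it at a large file on the C volume and a destination on the D array to get a quick MB/s figure for the path between them.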

Obviously, the system is built around multi-GPU (mGPU) work, so for a benchmark:
