Gigabyte MZ31-AR0 memory configuration


Frank173

Member
Feb 14, 2018
I'd like to ask a question about the memory configuration for the Gigabyte MZ31-AR0 motherboard:

AMD EPYC 7000 | Server Motherboard - GIGABYTE B2B Service

As can be seen, the layout of the PCIe slots is rather poorly designed. I like that the board is equipped with plenty of PCIe slots, but any full-length PCIe card makes it impossible to utilize memory banks E0 through H1. (Please see the manual: MZ31-AR0 (rev. 1.x) | Server Motherboard - GIGABYTE B2B Service.)

My question is this: if I am content with 128 GB of memory and want to use all 8 memory channels on the board, can I place RAM modules only into banks A0 through D1 (= 8 slots)? It is not clear to me from the manual how to configure memory on this board. I am thinking of using 8× 16 GB memory modules and would like to know whether I get 8-channel support if I only populate slots A0 through D1 (please see the manual linked above if the memory slot labelling is unclear).

Thanks a thousand
Matt
 

alex_stief

Well-Known Member
May 31, 2016
No, you can't. You need to populate DIMM-slots with the same color/letter first in order to use all memory channels.
And to quote the manual:
DIMM must be populated in sequential alphabetic order, starting with bank A.
When only one DIMM is used, it must be populated in memory slot A1.
So for only 8 DIMMs, you need to populate A1, B1, C1,...
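
For what it's worth, once the board is up you can sanity-check the result from the OS. Here is a minimal sketch (assuming Linux with dmidecode installed and root rights; the exact slot names reported, e.g. "DIMM_A1", depend on the BIOS, so treat the parsing as illustrative):

```python
# Rough sketch: list DIMM slots that report an installed module, using the
# SMBIOS tables. Assumes Linux, dmidecode on PATH, and root privileges.
import subprocess

def populated_dimm_slots():
    out = subprocess.run(
        ["dmidecode", "-t", "memory"],
        capture_output=True, text=True, check=True,
    ).stdout
    slots, locator, size = [], None, None
    for line in out.splitlines() + [""]:   # trailing "" flushes the last block
        line = line.strip()
        if line.startswith("Locator:"):
            locator = line.split(":", 1)[1].strip()
        elif line.startswith("Size:"):
            size = line.split(":", 1)[1].strip()
        elif line == "" and locator is not None:
            if size and size != "No Module Installed":
                slots.append((locator, size))
            locator, size = None, None
    return slots

if __name__ == "__main__":
    for locator, size in populated_dimm_slots():
        print(f"{locator}: {size}")
```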
 

Frank173

Member
Feb 14, 2018
Where does it state colors? I see letters starting from A0, A1, B0, B1, ... and where does it state that I have to populate A1, B1, C1, ... first? As far as I can see, the manual only states that when you have just one module, you need to place it into slot A1. I am not saying you are wrong, but would you be so kind as to link to a reference where you gained that knowledge?

If that were true, then how does this board make any sense at all? The only reason someone would buy a single-CPU EPYC board is for the 128 PCIe lanes. Otherwise, a much higher-clocked Threadripper 1950X, or any of the many choices based on Xeons, would seem far more attractive. On this board, Gigabyte basically allows only two full-length x16 PCIe cards to be used, which entirely defeats the purpose of this architecture. Or am I missing something here?

 

alex_stief

Well-Known Member
May 31, 2016
Jesus, tone it down.
Slots 0 and 1 of each bank (A,B,C,...) are color-coded.
And I am not just telling you things that are clearly written in the manual. I assumed you had already read it, so I added some additional information to make it clearer.
I am not going to go searching for some official-looking document that explains how color-coding of DIMM slots works, or that you need to populate a memory channel in order to use it.
 

Frank173

Member
Feb 14, 2018
Tone what down? So far you have made a claim or voiced a hunch; I do not even know whether you own the board or not, since you did not say. Hence, before shelling out thousands of dollars, I think it is more than reasonable that I would like to see references for your claim. Even if the slots are color-coded, what makes you think the modules have to be populated as you claimed? Again, I never said you are wrong, and if you cannot deal with a simple and polite request for factual backup, then there is no need to reply.

By the way, which manual? Gigabyte thought it appropriate to supply a 1(!!!) page sheet for the entire hardware installation of this board. If you read what you mentioned somewhere else, would you be so kind as to point me in that direction?

 

Patrick

Administrator
Staff member
Dec 21, 2010
Clearing this up a bit: you want a minimum of four DIMMs, one per NUMA node. Any fewer than that and cores have to go across the fabric to get to RAM, which kills performance. Four DIMMs provide one DIMM per channel instead of two. You will need two on each side of the CPU. Putting them all on the same side will leave two NUMA nodes without locally attached memory, which will destroy performance.

So 8 DIMMs is recommended: four DIMMs on each side, alternating slots, starting at the outermost slots. These alternating slots are color-coded, as @alex_stief mentions. You can get by with four DIMMs, but you want 8.
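
If you want to confirm after boot that no NUMA node was left without local memory, a quick sketch along these lines works on Linux (standard sysfs paths assumed; `numactl --hardware` shows the same information):

```python
# Sketch: report how much local memory each NUMA node has, so a node left
# without locally attached DIMMs is easy to spot. Assumes Linux sysfs.
import glob
import re

def numa_node_memory_kib():
    nodes = {}
    for path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        node = int(re.search(r"node(\d+)", path).group(1))
        with open(path) as f:
            for line in f:
                # Line looks like: "Node 0 MemTotal:       32768000 kB"
                if "MemTotal:" in line:
                    nodes[node] = int(line.split()[-2])
    return nodes

if __name__ == "__main__":
    for node, kib in sorted(numa_node_memory_kib().items()):
        flag = "" if kib > 0 else "  <-- no local memory!"
        print(f"node {node}: {kib / 1024 / 1024:.1f} GiB{flag}")
```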

On this comment
The only reason someone would buy a single-CPU EPYC board is for the 128 PCIe lanes. Otherwise, a much higher-clocked Threadripper 1950X, or any of the many choices based on Xeons, would seem far more attractive.
I strongly disagree.

You can get a single socket with up to 2 TB of RAM, which is more than a dual Intel Xeon Scalable (non-M series) system can handle. Beyond capacity, you get roughly twice the memory bandwidth of Threadripper, and you can use RDIMMs for larger RAM capacities and better error correction. You can also get 24 or 32 cores reasonably priced in a single socket, which can save on VMware pricing. Also, there are no currently shipping Threadripper motherboards with out-of-band management.

Saying the "only reason" someone would buy a single-CPU EPYC board is for the 128 PCIe lanes makes absolutely no sense. The vast majority of buyers will not use all 128 high-speed I/O lanes. Packet is an example: https://www.servethehome.com/packet-dell-emc-poweredge-amd-epyc-server-deployment-accelerating/

On using the lanes: what are you trying to build that needs access to all of the lanes, such that this is an issue?
 

Frank173

Member
Feb 14, 2018
Fair point, definitely not the only reason. My bad.

I plan to use the setup for training deep learning models. Currently it is still far cheaper to run a physical setup than to rent cloud compute resources. I plan to use 4 Titan V GPU compute cards, a 100 Gbit/s Mellanox ConnectX-4 NIC, and a HighPoint 4× NVMe PCIe switch card, both of which require a full 16 lanes.

Could you please refer me to the manual or other resource that explains the memory configurations? I could not find it in the hardware guide that Gigabyte put up on its site for this particular board.

I know I am not well versed on the hardware side, so when you mention NUMA nodes, is that because an EPYC CPU has 4 dies and each die would be supplied with 2 channels (as there seem to be 8 channels on this board)? If that is the case, then I guess the only way I would proceed with this board would be if I could install half-height memory modules and a full-sized GPU card would still fit on top of such half-height modules.
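
As a rough back-of-the-envelope sketch of the lane budget and the die/channel arithmetic discussed above (device lane counts are the ones mentioned in this thread, and the 4-die layout is the standard EPYC 7000 package):

```python
# Sketch: rough PCIe lane budget for the build described above, plus the
# memory-channel-per-die arithmetic behind the NUMA advice.
# Device list and lane counts are taken from the thread, not measured.
pcie_devices = {
    "Titan V GPUs (4x)": 4 * 16,
    "Mellanox ConnectX-4 NIC": 16,
    "HighPoint 4x NVMe switch card": 16,
}
lanes_used = sum(pcie_devices.values())
print(f"PCIe lanes used: {lanes_used} of 128 available on single-socket EPYC")

dies = 4       # EPYC 7000 has four dies per package
channels = 8   # eight DDR4 channels per socket
slots = 16     # this board: two DIMM slots per channel
print(f"{channels // dies} memory channels per die, "
      f"{slots // channels} DIMM slots per channel")
print("8 DIMMs -> one DIMM in every channel, i.e. two per die")
```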

 

Patrick

Administrator
Staff member
Dec 21, 2010
That makes sense on owned versus cloud.

The memory section of the hardware guide says that each sequential letter should be populated. On our system with this board, we were able to populate one DIMM per NUMA node, but they may have changed this in later revisions of the firmware. Personally, I would highly suggest 8 DIMMs.
 

Frank173

Member
Feb 14, 2018
Thanks.

I am still confused about where memory configuration information is available on Gigabyte's website. Here is the link to all the manuals: MZ31-AR0 (rev. 1.x) | Server Motherboard - GIGABYTE B2B Service

Do you mind sharing where you picked up that information regarding the specific memory configuration for this particular board? The hardware guide literally says that "DIMM must be populated in sequential alphabetical order, starting with Bank A". Strictly linguistically speaking (sequential refers to the numeric values, alphabetical to the letters), that would mean populating A0, then A1, then B0, then B1, and so on... hence I am still unsure about the correct configuration for this particular board.


 

Patrick

Administrator
Staff member
Dec 21, 2010
That quote you have means A B C ... H per the DDR population table.

It is also just how all of the EPYC 7000 systems work regardless of vendor when there are 16 DIMM slots per CPU.
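
Spelled out, the order implied by that rule looks like the following sketch (slot labels as used in this thread; treat it as an illustration, not an official population table):

```python
# Sketch of the population order implied by "sequential alphabetic order,
# starting with bank A": slot 1 of each bank A..H first, then slot 0.
# Slot labels (A0/A1 ... H0/H1) are taken from this thread, not verified
# against every board revision. For fewer than 8 DIMMs, the one-DIMM-per-
# NUMA-node advice earlier in the thread takes priority over this list.
def population_order(num_dimms):
    banks = "ABCDEFGH"
    order = [f"{b}1" for b in banks] + [f"{b}0" for b in banks]
    if not 1 <= num_dimms <= len(order):
        raise ValueError("this board has 16 DIMM slots")
    return order[:num_dimms]

print(population_order(8))    # ['A1', 'B1', ..., 'H1'] -> one DIMM per channel
print(population_order(16))   # all sixteen slots
```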
 

Frank173

Member
Feb 14, 2018
All right, thanks for the explanation. Would you have an idea whether half-height memory modules might work with full-length GPU cards that would otherwise be blocked by full-sized memory modules?

 

automobile

Member
May 16, 2017
Hi, everybody! I have read the whole topic, but I am still not sure how I should populate the memory slots on this board for the best performance if I have only 8 DIMMs:
A1 B1 C1 D1 E1 F1 G1 H1, or A0 B0 C0 D0 E0 F0 G0 H0?
And is there any gain in performance if I populate all the slots?
 

alex_stief

Well-Known Member
May 31, 2016
The manual states that the 1 slots (A1, B1, ...) have to be populated first.
I don't think you can get any significant gains with 2 DIMMs per channel. The maximum supported memory frequency might even be lower.
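
If you want to measure it yourself rather than take anyone's word for it, a very crude single-threaded copy test (nothing like a proper STREAM run; NumPy assumed to be installed) gives a before/after number to compare:

```python
# Sketch: very rough memory-copy bandwidth estimate, useful only for a
# before/after comparison of 1 vs 2 DIMMs per channel on the same box.
# Single-threaded, so it will not saturate all eight channels.
import time
import numpy as np

def copy_bandwidth_gib_s(size_mib=1024, repeats=5):
    src = np.ones(size_mib * 1024 * 1024 // 8, dtype=np.float64)
    dst = np.empty_like(src)
    best = 0.0
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.copyto(dst, src)
        dt = time.perf_counter() - t0
        # One read plus one write of size_mib MiB per copy.
        best = max(best, 2 * size_mib / 1024 / dt)
    return best

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gib_s():.1f} GiB/s (single-threaded copy)")
```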
 

kapone

Well-Known Member
May 23, 2015
There is this thing called a "PCIe slot extender", which essentially adds one slot width... except it stays on the same plane. It looks like this:

[Image: PCIe slot extender]

It basically raises your PCIe slot "height" by approximately 20 mm while the card stays vertical. Use one of these, add a 20 mm spacer to the case slot, and it should clear any installed RAM?
 