EU Gigabyte Mainboard MC12-LE0 Rev 1.0 AMD B550 AM4 Ryzen


SlowmoDK

Active Member
Oct 4, 2023
211
130
43
I don't remember changing any memory settings.
I just installed the CPU and memory, it booted and detected all of the memory, so it should work with default values.
Later I updated the BMC and BIOS through the BMC web interface and verified the machine with a complete memtest run.
Thanks for posting! It helped with diagnostics ;)
 

PANiCnz

New Member
Apr 22, 2022
27
5
3
Has anyone got any tips for picking up cheap DDR4 UDIMMs for this board? Lots of RDIMMs locally, but I'm struggling to find UDIMMs. Not too fazed if it's not ECC, I can live without that.
 

Szala

New Member
Mar 23, 2024
23
3
3
Has anyone got any tips for picking up cheap DDR4 UDIMMs for this board? Lots of RDIMMs locally, but I'm struggling to find UDIMMs. Not too fazed if it's not ECC, I can live without that.
If it's available in your country, the board works well with Goodram 3200 MHz ECC UDIMMs. The part numbers of the sticks are in the links below:

https://allegro.pl/oferta/pamiec-serwerowa-goodram-16gb-1x16gb-3200mhz-ddr4-ecc-15532617707 16GB @ 50USD
https://allegro.pl/oferta/goodram-u...200mhz-pc4-25600-w-mem3200e4d832g-15688289441 32GB @ 88USD
 

Crash_0verride

New Member
Oct 3, 2023
11
4
3
I use an x4x4x8 riser for an NVMe mirror for Proxmox and the x8 slot for an HBA passed through to TrueNAS, but depending on the number of drives the NVMe-to-SATA solution is better for lower power.
Won't an NVMe-to-SATA adapter be a bottleneck? The internal NVMe slot is only PCIe 3.0 x1, so roughly 1 GB/s total.
I'm building my new Proxmox node now, saying goodbye to my power-hungry X9SRi...
and I can't figure out the best config for me.
The idea is to use the x4 slot for a Mellanox card or maybe an X550-AT2; I hope x4 is enough for one 10G uplink. BTW, how hot does this card run? Is it worth adding an extra cooler to its heatsink?
Then use an x4x4x8 bifurcation card in the x16 slot for 2 NVMe drives...
And next I don't know... should I find an HBA, or grab an NVMe-to-SATA adapter for the internal slot? ATM I have 4x 4 TB HDDs, but plan to switch them to 2x 18 TB drives later on.

Goal: run Proxmox with TrueNAS on top, plus a couple of VMs with containers.
Please give me some advice.
 

SlowmoDK

Active Member
Oct 4, 2023
211
130
43
Won't an NVMe-to-SATA adapter be a bottleneck? The internal NVMe slot is only PCIe 3.0 x1, so roughly 1 GB/s total.
I'm building my new Proxmox node now, saying goodbye to my power-hungry X9SRi...
and I can't figure out the best config for me.
The idea is to use the x4 slot for a Mellanox card or maybe an X550-AT2; I hope x4 is enough for one 10G uplink. BTW, how hot does this card run? Is it worth adding an extra cooler to its heatsink?
Then use an x4x4x8 bifurcation card in the x16 slot for 2 NVMe drives...
And next I don't know... should I find an HBA, or grab an NVMe-to-SATA adapter for the internal slot? ATM I have 4x 4 TB HDDs, but plan to switch them to 2x 18 TB drives later on.

Goal: run Proxmox with TrueNAS on top, plus a couple of VMs with containers.
Please give me some advice.
For 4-6 rust spinners one lane is fine, and NVMe-to-SATA seems like a no-brainer... no sane ZFS layout will bottleneck you.

I personally run an X710-DA4 (4x SFP+) in the x4 slot in one box and a Mellanox ConnectX-4 in another; both run without issues.

If you only plan to use a max of 4 spinners, then you can go NVMe-to-SATA and get a 4x NVMe adapter for the x16 slot instead of the x4x4x8 :)

Since I run my TrueNAS bare metal, I can also use the onboard SATA for a total of 14 SATA ports with a cheap LSI 9207-8i.

I've tested several newer LSI models but keep coming back to the 2008-based cards for pure tested stability (if you only have SATA drives).

If you run newer SAS drives, then as a minimum go for the 9300 series based on the 3008 chip.
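To put some numbers behind the one-lane claim, here is a minimal back-of-the-envelope sketch; the ~250 MB/s per-drive sequential rate is an assumption, and real mixed ZFS workloads rarely drive every disk at full sequential speed at once:

```python
# Back-of-the-envelope check of the "one Gen3 lane is enough" claim.
# Assumptions: 8 GT/s per PCIe 3.0 lane with 128b/130b encoding (~985 MB/s usable),
# and ~250 MB/s sequential per large HDD (an assumed, optimistic figure).
GEN3_LANE_MBPS = 8e9 * 128 / 130 / 8 / 1e6   # ~985 MB/s per lane
HDD_SEQ_MBPS = 250

for drives in (2, 4, 6):
    aggregate = drives * HDD_SEQ_MBPS
    verdict = "fits" if aggregate <= GEN3_LANE_MBPS else "could cap pure sequential bursts"
    print(f"{drives} HDDs: ~{aggregate} MB/s vs ~{GEN3_LANE_MBPS:.0f} MB/s lane -> {verdict}")
```

On paper 4-6 drives can exceed one lane during pure sequential streams, but typical mirror/RAIDZ traffic won't sustain that across every disk at once, which is the point being made above.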
 

SlowmoDK

Active Member
Oct 4, 2023
211
130
43
The idea is to use the x4 slot for a Mellanox card. I hope x4 is enough for one 10G link. BTW, how hot does this card run? Is it worth adding an extra cooler to its heatsink?
That depends on your case airflow. I have an extra Noctua fan placed inside my case to cool both the HBA and the NIC in my boxes.

Probably not needed in real server cases or something like the Fractal Torrent,

but both HBAs and most server-grade NICs need some airflow to not overheat.
 

Crash_0verride

New Member
Oct 3, 2023
11
4
3
For 4-6 rust spinners one lane is fine, and NVMe-to-SATA seems like a no-brainer... no sane ZFS layout will bottleneck you.

I personally run an X710-DA4 (4x SFP+) in the x4 slot in one box and a Mellanox ConnectX-4 in another; both run without issues.

If you only plan to use a max of 4 spinners, then you can go NVMe-to-SATA and get a 4x NVMe adapter for the x16 slot instead of the x4x4x8 :)

Since I run my TrueNAS bare metal, I can also use the onboard SATA for a total of 14 SATA ports with a cheap LSI 9207-8i.

I've tested several newer LSI models but keep coming back to the 2008-based cards for pure tested stability (if you only have SATA drives).

If you run newer SAS drives, then as a minimum go for the 9300 series based on the 3008 chip.
Thanks. For a start, yes, 4 rust spinners, then changing to 2 HDDs and maybe 2 SATA SSDs max.
Currently I don't have any 10G NIC or network equipment, but I want to order all the parts for it together now, with a 10G NIC as future-proofing.
Thanks, mate, for the HBA advice.
Currently I run TrueNAS bare metal on it (moved my installation from the X9SRi) with all my data and containers, but TrueNAS announced that they will drop container support in the next release, so no more SCALE apps ((
 

SlowmoDK

Active Member
Oct 4, 2023
211
130
43
Currently I run TrueNAS bare metal on it (moved my installation from the X9SRi) with all my data and containers, but TrueNAS announced that they will drop container support in the next release, so no more SCALE apps ((
That seems like great news to me :) It has been a hot mess since day 1 IMO... and it lets iXsystems focus on the storage part.

Spin up a Debian/Docker VM and manage it with Portainer like a normal person, if you really need to run stuff inside TrueNAS :)
But you changing to Proxmox as your hypervisor will hopefully do the same even better hehe
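For anyone going the Debian/Docker-VM route, a minimal sketch of starting Portainer CE with the Docker SDK for Python (`pip install docker`); the image tag, port and volume name follow Portainer's usual defaults but are assumptions here rather than anything specific to this board or thread:

```python
# Minimal sketch: start Portainer CE inside a Debian/Docker VM via the Docker SDK.
# The image, port 9443 and the "portainer_data" volume mirror Portainer's documented
# defaults; treat them as assumptions and adjust to taste.
import docker

client = docker.from_env()
client.containers.run(
    "portainer/portainer-ce:latest",
    name="portainer",
    detach=True,
    restart_policy={"Name": "always"},
    ports={"9443/tcp": 9443},                     # HTTPS web UI
    volumes={
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        "portainer_data": {"bind": "/data", "mode": "rw"},
    },
)
print("Portainer should now be reachable on https://<vm-ip>:9443")
```

The same thing is normally done with a one-line `docker run`; the SDK version is just handy if the VM is provisioned from a script.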
 

Crash_0verride

New Member
Oct 3, 2023
11
4
3
That depends on your case airflow. I have an extra Noctua fan placed inside my case to cool both the HBA and the NIC in my boxes.

Probably not needed in real server cases or something like the Fractal Torrent,

but both HBAs and most server-grade NICs need some airflow to not overheat.
My build is in a Fractal Node 804, so there are intake and exhaust fans in its upper part alongside the CPU fan, but I'll probably need to add one intake in the bottom part for NIC and HBA airflow.
 

Crash_0verride

New Member
Oct 3, 2023
11
4
3
That seems like great news to me :) It has been a hot mess since day 1 IMO... and it lets iXsystems focus on the storage part.

Spin up a Debian/Docker VM and manage it with Portainer like a normal person, if you really need to run stuff inside TrueNAS :)
But you changing to Proxmox as your hypervisor will hopefully do the same even better hehe
Yes, but IMHO I liked it as an all-in-one solution )) it worked great for me )) but yep, it's time to prepare to become a normal person hehe ))
 

SlowmoDK

Active Member
Oct 4, 2023
211
130
43
My build is in a Fractal Node 804, so there are intake and exhaust fans in its upper part alongside the CPU fan, but I'll probably need to add one intake in the bottom part for NIC and HBA airflow.
I never looked at the Node 804... nice case, and it seems very easy to get some good airflow going, with plenty of room for both extra intake and exhaust fans :D

My TrueNAS is in a Define XL... chunky motherfuxxer
 

nilfisk_urd

Member
Feb 14, 2023
32
11
8
The idea is to use the x4 slot for a Mellanox card or maybe an X550-AT2; I hope x4 is enough for one 10G uplink. BTW, how hot does this card run?
Mellanox ConnectX-3 and Intel X550 (even with the newest firmware) don't support ASPM, so your CPU will use more power. Get a ConnectX-4 or an Intel X710 instead.
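If you want to check what a given card actually reports, here is a minimal sketch that parses `lspci -vv` output on Linux (run as root so the LnkCtl line is visible; pciutils assumed installed):

```python
# Sketch: report ASPM capability (LnkCap) and current state (LnkCtl) per PCIe device
# by parsing `lspci -vv`. Assumes Linux with pciutils installed; run as root.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():        # device header, e.g. "01:00.0 Ethernet controller: ..."
        device = line
    elif device and "LnkCap:" in line:
        m = re.search(r"ASPM ([^,]+)", line)
        print(f"{device}\n  advertised: {m.group(1) if m else 'n/a'}")
    elif device and "LnkCtl:" in line:
        m = re.search(r"ASPM ([^;]+)", line)
        print(f"  active:     {m.group(1).strip() if m else 'n/a'}")
```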
 
  • Like
Reactions: SlowmoDK

Cruzader

Well-Known Member
Jan 1, 2021
710
709
93
The idea is to use the x4 slot for a Mellanox card or maybe an X550-AT2; I hope x4 is enough for one 10G uplink.
I've got a ConnectX-4 Lx in an x2 slot and can saturate that 10G (and get 6-7G on the 2nd port).
In an x4 you can max out both ports without any issues.


I'm still waiting for my last package from Ali, which of course has the risers and heatsinks :rolleyes:
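That matches the raw lane arithmetic; a quick sketch assuming PCIe 3.0 links with 128b/130b encoding and ignoring protocol overhead:

```python
# Why two 10 GbE ports fit in an x4 slot but not in x2 (PCIe 3.0, 128b/130b encoding assumed).
lane_gbit = 8 * 128 / 130                      # ~7.88 Gbit/s usable per Gen3 lane
for lanes in (1, 2, 4):
    print(f"x{lanes}: ~{lanes * lane_gbit:.1f} Gbit/s of PCIe for 2 x 10 GbE ports (20 Gbit/s needed)")
# x2 -> ~15.8 Gbit/s: one port at line rate leaves ~6 Gbit/s for the second,
# which matches the 6-7G observation above; x4 -> ~31.5 Gbit/s covers both ports.
```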
 

Crash_0verride

New Member
Oct 3, 2023
11
4
3
Mellanox ConnectX-3 and Intel X550 (even with the newest firmware) don't support ASPM, so your CPU will use more power. Get a ConnectX-4 or an Intel X710 instead.
Thanks for the advice. I'll check those cards on Ali.


I've got a ConnectX-4 Lx in an x2 slot and can saturate that 10G (and get 6-7G on the 2nd port).
In an x4 you can max out both ports without any issues.


I'm still waiting for my last package from Ali, which of course has the risers and heatsinks :rolleyes:
Thanks for the info. BTW, which heatsinks did you choose?
 

Cruzader

Well-Known Member
Jan 1, 2021
710
709
93
I grabbed these in the single-fan option since they're going into 4U cases with solid airflow.

The last 2 sets of 4x 32 GB 2666 unbuffered ECC arrived today, so now it's just the Ali package with the heatsinks and risers that's missing.
 

rvdm

New Member
Jan 9, 2022
4
0
1
Has anyone got this board working with a Ryzen 5 5500? Mine is on BIOS F6; I wonder if this CPU is unsupported or if a BIOS update would help...
 

_Dejan_

Member
Aug 18, 2022
39
15
8
Hi everyone,
I've read the thread and still have some questions...
I currently run a Supermicro H11DSi motherboard, an Epyc 7601 CPU, 128 GB of RDIMM RAM (8x 16 GB) and a Mellanox CX354 (using only 1x 10G), and I want to replace it because of the high power consumption, the lack of redundancy and the CPU being more powerful than I need (it runs a few VMs: Sophos FW, Home Assistant OS, TrueNAS, an IPTV server, a GPS track server, a UniFi controller and one Win 10 VM). Even with only 16 cores enabled, CPU usage in Proxmox is around 4% most of the time, with some spikes to 20%... First I thought about the MJ11-EC1 motherboard, but because of a lot of problems with PCIe adapters I started looking for an alternative and found this one...

1.) Which supported CPUs have ECC support? I'm thinking of buying an AMD Ryzen 5 5600 or 5600X because I can get a new one for a better price than a used one on eBay, and compared with my current Epyc 7601 it has a much better single-core score. Single-core performance is important in my case because the Sophos XG firewall can only decode a single SSL/TLS connection on a single core...
If I read the spec correctly, it supports ECC and PCIe 4.0. Will this CPU support running x4x4x4x4 and x4x4x8 bifurcation? I'm currently not sure which option I will need...
2.) Is mixing PCIe 3.0 & 4.0 in bifurcation possible? For example, running an Intel X710-DA2 (PCIe 3.0 x8) and 2x Samsung PM9A3 (PCIe 4.0 x4)? What will happen in this case? Will all devices run in PCIe 3.0 mode? What would that mean for the PM9A3? Will the speed be limited to 4 GB/s, or to half of the maximum? For example, would a 1.92 TB device drop from read/write 6800/2700 to 3400/1350, or be limited to 4000/2700?
3.) Can someone post how the IOMMU grouping of devices is done on this MB? I need to pass the SATA controller through to a TrueNAS VM, and I would like to know whether it shares a group with anything else. In the worst case I will use a SATA adapter (I need it only for 2x HDD) in the M.2 slot...
 

mackspain

New Member
Dec 17, 2023
15
13
3
Hi everyone,
I've read the thread and still have some questions...
I currently run a Supermicro H11DSi motherboard, an Epyc 7601 CPU, 128 GB of RDIMM RAM (8x 16 GB) and a Mellanox CX354 (using only 1x 10G), and I want to replace it because of the high power consumption, the lack of redundancy and the CPU being more powerful than I need (it runs a few VMs: Sophos FW, Home Assistant OS, TrueNAS, an IPTV server, a GPS track server, a UniFi controller and one Win 10 VM). Even with only 16 cores enabled, CPU usage in Proxmox is around 4% most of the time, with some spikes to 20%... First I thought about the MJ11-EC1 motherboard, but because of a lot of problems with PCIe adapters I started looking for an alternative and found this one...

1.) Which supported CPUs have ECC support? I'm thinking of buying an AMD Ryzen 5 5600 or 5600X because I can get a new one for a better price than a used one on eBay, and compared with my current Epyc 7601 it has a much better single-core score. Single-core performance is important in my case because the Sophos XG firewall can only decode a single SSL/TLS connection on a single core...
If I read the spec correctly, it supports ECC and PCIe 4.0. Will this CPU support running x4x4x4x4 and x4x4x8 bifurcation? I'm currently not sure which option I will need...
2.) Is mixing PCIe 3.0 & 4.0 in bifurcation possible? For example, running an Intel X710-DA2 (PCIe 3.0 x8) and 2x Samsung PM9A3 (PCIe 4.0 x4)? What will happen in this case? Will all devices run in PCIe 3.0 mode? What would that mean for the PM9A3? Will the speed be limited to 4 GB/s, or to half of the maximum? For example, would a 1.92 TB device drop from read/write 6800/2700 to 3400/1350, or be limited to 4000/2700?
3.) Can someone post how the IOMMU grouping of devices is done on this MB? I need to pass the SATA controller through to a TrueNAS VM, and I would like to know whether it shares a group with anything else. In the worst case I will use a SATA adapter (I need it only for 2x HDD) in the M.2 slot...
Hi, regarding 3: I don't have access to print the IOMMU grouping right now, but the SATA controller shares a group with one of the USB controllers, so you cannot pass it through under Proxmox.

I was also unsuccessful with your fallback option (SATA via the M.2 slot): I could not pass through an ASM1166 M.2 6-port SATA expansion card in the onboard M.2 slot.

An HBA works great.
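For anyone who wants to map the grouping on their own board before committing to passthrough, here is a minimal sketch of the usual group listing (assumes a Linux host with the IOMMU enabled, e.g. `amd_iommu=on iommu=pt` on the kernel command line, and `lspci` from pciutils available):

```python
# Sketch: list IOMMU groups so you can see what shares a group with the SATA
# controller before trying PCI passthrough. Assumes /sys/kernel/iommu_groups
# is populated (IOMMU enabled in the BIOS and on the kernel command line).
import pathlib
import subprocess

groups = pathlib.Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in sorted((group / "devices").iterdir()):
        desc = subprocess.run(["lspci", "-nns", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print(f"IOMMU group {group.name}: {desc}")
```

VFIO requires every device in a group to be handed to the VM together, which is why a SATA controller that shares a group with a USB controller can't be passed through cleanly.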
 
  • Like
Reactions: _Dejan_