Gigabyte MJ11-EC1 EPYC 3151 Mystery


Yababakets

New Member
Sep 11, 2024
6
1
3
Email-ID: 1722618-1
Question: Hi, I have a Gigabyte MJ11-EC1 motherboard, which is part of the G431-MM0 rack, and I've noticed there is a spot where an LDL2743-24E10-9H 74-pin SFF-8654 SAS female connector could potentially be soldered. Could you please confirm if this connector can be added, and if so, whether it will function out of the box or if adjustments in the BIOS would be required? The BIOS version is F09. Thank you for your time. Best regards, Yaba 9/19/2024 11:01 AM
Answer:Dear Yaba,

First of all, the MJ11-EC1 is not a module we released on the retail market. If you purchased a server rack, please give us the model name and the serial number of the server rack instead.
Also, we do not suggest that users modify or alter the product (including soldering devices onto the board). The process might damage the product and void its warranty coverage. Please be aware.
Feel free to contact us again anytime if you have any feedback or need any support from us. Cases without updates in 7 days will be closed until further information is received.
Regards,
GIGABYTE

9/24/2024 10:35 AM

I think it works :))
 

Pallee

New Member
Feb 17, 2019
18
6
3
I just got 4 of these boards and plan to use 3 of them as a Proxmox cluster. But as I haven't got all the hardware yet, I started to play around.
Seeing that I already have an iSCSI target on my network, I was wondering if anyone has managed to get iSCSI boot to work?
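(Not firmware iSCSI boot itself, but before fighting with NIC option ROMs it may be worth confirming the target answers from a live Linux system. A minimal sketch, assuming open-iscsi is installed; the portal address and IQN below are placeholders:)
Code:
# Ask the portal which targets it advertises
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
# Log in to a discovered target; the LUN then shows up as a block device (check lsblk)
iscsiadm -m node -T iqn.2024-01.lan.example:target1 -p 192.168.1.10 --login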
 

Pallee

New Member
Feb 17, 2019
18
6
3
I am trying to condense the knowledge of this thread for my own sanity. Credit goes to all contributors in this thread; I just compiled it... here goes:

Power (my measurements):
Idle power draw is approx. <20W with 2 DIMMs and 1 SATA SSD running Proxmox with no VMs, on a picoPSU.
BMC power draw (main system off) is 5-6W, powered by a picoPSU.

The small 4-pin power socket is from the TE ELCON Micro Power series, specifically 1-2204801-8 (female) and 2204748-2 (male). (post)
NOTE: At least one of my ATX to 4-pin adapters had a broken ground. Hence it would not power up the BMC without an EPS connector for the CPU providing ground.
  • 5V standby (Green)
  • 5V (Red)
  • PSU-OK (Grey/Purple)
  • Ground (Black)
The board runs fine with either a 4- or 8-pin EPS connector for CPU power.
Both 5V and 12V are needed to boot the complete system; however, only 5V standby is required for the BMC.

Hardware:
This board is originally from a G431-MM0 GPU/mining server (link)
NOTE: PCIe-slot is missing!

Noteworthy connectors:
  • JTAG serial to BMC
  • SFF-8654 4i (SlimSAS) to 4x SATA (PCIe may work, citation needed)
  • SFF-8654 8i to PCIe x8 cannot bifurcate with the stock BIOS. See limitations here. Thanks @hmartin.
Only the CPU fan is controllable by default; see below.

Memory compatibility:

  • LRDIMM is not supported
  • Non-ECC is not supported (may work?)
  • QVL: MJ11-EC0 (link)

NOTE: Unlike most other boards: populate the RAM slots marked "1" (blue) first, not the ones marked "0" (black).
If one or more DIMMs per channel are dual-rank, the memory clock may need to be reduced to 1866 MT/s.
Manually setting the memory clock:
BIOS: Advanced --> AMD CBS --> UMC --> DDR4 --> Common Options --> OC enable --> Accept; leave everything on Auto except the timing, fixed to 933 (the MEMCLK is specified in MHz, so 933 MHz = 1866 MT/s).
There have been reports that the BIOS does not adhere to the user-specified clocks. YMMV.
Some boards have issues with dual-rank DIMMs; see this post by Andiii for workaround settings.
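To check which memory clock the BIOS actually applied once the system is up, a quick look from Linux (assumes dmidecode is installed):
Code:
# "Speed" is the DIMM's rated speed; "Configured Memory Speed" is what the BIOS set
sudo dmidecode -t memory | grep -iE 'locator|speed'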

BMC:
The network port above the USB 3.0 ports is for the BMC.
Default login: admin/password
JTAG serial to BMC is 3.3V; see this post for tips.
Files:
NOTE: The sysadmin account is disabled over SSH by default. See PeterF: Access after BMC upgrade.
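If the web UI is unavailable, the BMC also answers IPMI over LAN. A minimal sketch with ipmitool, assuming the default admin/password credentials above still apply; <bmc-ip> is a placeholder:
Code:
# Query chassis power state through the BMC
ipmitool -I lanplus -H <bmc-ip> -U admin -P password chassis status
# Power the main system on (or off/cycle) remotely
ipmitool -I lanplus -H <bmc-ip> -U admin -P password chassis power on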

BIOS:
Multithreading is disabled by default.
You need to enable SMT (Simultaneous Multithreading) in the BIOS; it's off by default on this board:
Advanced -> AMD CBS -> Zen Common Options -> Core/Thread Enablement -> Agree -> SMTEN -> Auto
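To confirm SMT actually took effect, a quick check from Linux (standard sysfs path on recent kernels):
Code:
# 1 = SMT active, 0 = disabled
cat /sys/devices/system/cpu/smt/active
# EPYC 3151 should report 8 CPUs / 2 threads per core with SMT on
lscpu | grep -E 'Thread|^CPU\(s\)'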
Files:

Misc/Quirks:
The missing PCIe connector is probably a Foxconn LDL2743-24E10-9H.
Some boards require the boot option pcie_aspm=off in certain Linux environments to avoid throwing BadDLLP and similar PCIe errors.
(I have 2 boards that do not need this, 1 board boots with some errors, and 1 board straight up refuses to boot into Proxmox.)

EPYC Zen 1 has problems with PCIe ASPM. You need to deactivate it in /etc/kernel/cmdline with pcie_aspm=off. After this you need to run update-initramfs -u, otherwise the change will not be used by the system. The ASPM settings are unfortunately missing in UEFI.
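As a concrete sketch of the quoted procedure (the path assumes a systemd-boot Proxmox install as described above; on GRUB-based installs, edit GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and run update-grub instead):
Code:
# /etc/kernel/cmdline is a single line; append the flag to it
echo "$(cat /etc/kernel/cmdline) pcie_aspm=off" > /etc/kernel/cmdline
# Regenerate the initramfs so the change is picked up
update-initramfs -u
# After a reboot, verify the flag is live
cat /proc/cmdline
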
Fan control
See PeterF's posts about enabling fan control:
Post 407273
Post 407547

3D-files
IO-Shields:
Fan adapter:
 
Last edited:

alixinne

New Member
Oct 17, 2024
1
0
1
Hello everyone!

I am trying to get this board running to upgrade my home server; however, I am having trouble getting it to boot, and I'm running out of options on what to check or change. The BMC is working fine, but the system never gets past the "Please wait for chipset..." screen (code 70 in the bottom-right corner). Once it gets there, it reboots.

Things I've tried from reading this thread:
  • Clear CMOS
  • Updated BMC to 12.61.21
  • Updated BIOS to F09 (from this post)
  • Tried various 1x/2x/4x RAM configurations, either:
    • SK Hynix HMA82GR7AFR8N-VK 16GB (is on the MJ11-EC0 QVL)
    • Samsung M393A4K40BB1-CRC0Q 32GB (does work with this board according to this post)
    • (using the blue slots first, as described in the manual and some previous posts)
I bought the board (with the ATX adapter) from ram-koenig on eBay, and I'm powering it with a 300W PSU with currently only the board connected to it, no extra peripherals for now.

Does anyone here have any suggestions on where to go next from here?
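(One avenue that may be worth checking: the BMC's system event log sometimes records why POST aborts. A hedged sketch with ipmitool, assuming the default credentials mentioned earlier in the thread; <bmc-ip> is a placeholder:)
Code:
# Dump the system event log; look for memory- or POST-related entries
ipmitool -I lanplus -H <bmc-ip> -U admin -P password sel list
# Sensor readings can also reveal PSU rail problems
ipmitool -I lanplus -H <bmc-ip> -U admin -P password sensor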
 

efeschiyan

New Member
Oct 8, 2024
2
0
1
Hey there folks
The board is pretty decent and I've migrated my old ivy-bridge very "temporary" "quick-and-dirty-fix" NAS from an old HP SFF8300 that I built 7 years ago "until I get a proper board for storage some time next year".
The only serious issue I am getting is "enp4s0 / enp5s0: PCIe link lost"
Code:
Oct 18 04:40:23 srv kernel: igb 0000:05:00.0 enp5s0: PCIe link lost
Oct 18 04:40:23 srv kernel: ------------[ cut here ]------------
Oct 18 04:40:23 srv kernel: igb: Failed to read reg 0xc030!
Oct 18 04:40:23 srv kernel: WARNING: CPU: 3 PID: 719144 at drivers/net/ethernet/intel/igb/igb_main.c:745 igb_rd32+0x93/0xb0 [igb]
Oct 18 04:40:23 srv kernel: Modules linked in: btrfs blake2b_generic xor raid6_pq veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter nf_tables libcrc32c softdog binfmt_misc sunrpc nfnetlink_log nfnetlink intel_rapl_msr intel_rapl_common amd64_edac edac_mce_amd kvm_amd kvm irqbypass crct10dif_pclmul crc32_pclmul polyval_clmulni polyval_generic ghash_clmulni_intel sha256_ssse3 sha1_ssse3 ipmi_ssif rapl uas acpi_ipmi pcspkr ahci usb_storage ipmi_si i2c_piix4 ccp libahci ipmi_devintf ipmi_msghandler 8250_dw zfs(PO) spl(O) vhost_net vhost vhost_iotlb tap k10temp efi_pstore dmi_sysfs ip_tables x_tables autofs4 input_leds hid_generic usbkbd dm_crypt bonding tls usbhid hid xhci_pci nvme xhci_pci_renesas igb ast nvme_core xhci_hcd dca i2c_algo_bit nvme_auth mac_hid aesni_intel crypto_simd cryptd
Oct 18 04:40:23 srv kernel: CPU: 3 PID: 719144 Comm: kworker/3:2 Tainted: P           O       6.8.12-2-pve #1
Oct 18 04:40:23 srv kernel: Hardware name: GIGABYTE G431-MM0-OT/MJ11-EC1-OT, BIOS F09 09/14/2021
Oct 18 04:40:23 srv kernel: Workqueue: events igb_watchdog_task [igb]
Oct 18 04:40:23 srv kernel: RIP: 0010:igb_rd32+0x93/0xb0 [igb]
Oct 18 04:40:23 srv kernel: Code: c7 c6 03 34 72 c0 e8 0c 64 50 d2 48 8b bb 28 ff ff ff e8 a0 cf fe d1 84 c0 74 c1 44 89 e6 48 c7 c7 f8 40 72 c0 e8 cd 56 80 d1 <0f> 0b eb ae b8 ff ff ff ff 31 d2 31 f6 31 ff e9 a9 76 87 d2 66 0f
Oct 18 04:40:23 srv kernel: RSP: 0018:ffffa81e19c7bd88 EFLAGS: 00010246
Oct 18 04:40:23 srv kernel: RAX: 0000000000000000 RBX: ffff8fda49994f38 RCX: 0000000000000000
Oct 18 04:40:23 srv kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
Oct 18 04:40:23 srv kernel: RBP: ffffa81e19c7bd98 R08: 0000000000000000 R09: 0000000000000000
Oct 18 04:40:23 srv kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 000000000000c030
Oct 18 04:40:23 srv kernel: R13: 0000000000000000 R14: 0000000000000000 R15: ffff8fda4b562340
Oct 18 04:40:23 srv kernel: FS:  0000000000000000(0000) GS:ffff8fe13b380000(0000) knlGS:0000000000000000
Oct 18 04:40:23 srv kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Oct 18 04:40:23 srv kernel: CR2: 0000613aca4b31f0 CR3: 0000000195556000 CR4: 00000000003506f0
Oct 18 04:40:23 srv kernel: Call Trace:
Oct 18 04:40:23 srv kernel:  <TASK>
Oct 18 04:40:23 srv kernel:  ? show_regs+0x6d/0x80
Oct 18 04:40:23 srv kernel:  ? __warn+0x89/0x160
Oct 18 04:40:23 srv kernel:  ? igb_rd32+0x93/0xb0 [igb]
Oct 18 04:40:23 srv kernel:  ? report_bug+0x17e/0x1b0
Oct 18 04:40:23 srv kernel:  ? handle_bug+0x46/0x90
Oct 18 04:40:23 srv kernel:  ? exc_invalid_op+0x18/0x80
Oct 18 04:40:23 srv kernel:  ? asm_exc_invalid_op+0x1b/0x20
Oct 18 04:40:23 srv kernel:  ? igb_rd32+0x93/0xb0 [igb]
Oct 18 04:40:23 srv kernel:  ? igb_rd32+0x93/0xb0 [igb]
Oct 18 04:40:23 srv kernel:  igb_update_stats+0x89/0x830 [igb]
Oct 18 04:40:23 srv kernel:  igb_watchdog_task+0x134/0x8a0 [igb]
Oct 18 04:40:23 srv kernel:  process_one_work+0x16d/0x350
Oct 18 04:40:23 srv kernel:  worker_thread+0x306/0x440
Oct 18 04:40:23 srv kernel:  ? __pfx_worker_thread+0x10/0x10
Oct 18 04:40:23 srv kernel:  kthread+0xf2/0x120
Oct 18 04:40:23 srv kernel:  ? __pfx_kthread+0x10/0x10
Oct 18 04:40:23 srv kernel:  ret_from_fork+0x47/0x70
Oct 18 04:40:23 srv kernel:  ? __pfx_kthread+0x10/0x10
Oct 18 04:40:23 srv kernel:  ret_from_fork_asm+0x1b/0x30
Oct 18 04:40:23 srv kernel:  </TASK>
Oct 18 04:40:23 srv kernel: ---[ end trace 0000000000000000 ]---
Oct 18 04:40:34 srv kernel: igb 0000:05:00.0 enp5s0: NETDEV WATCHDOG: CPU: 3: transmit queue 0 timed out 7168 ms
Oct 18 04:40:34 srv kernel: igb 0000:05:00.0 enp5s0: Reset adapter
Oct 18 04:40:35 srv kernel: vmbr0: port 1(enp5s0) entered disabled state
That's a snippet from the latest proxmox 6.8 kernel.
My kernel cmdline is "pci=nommconf ahci.mobile_lpm_policy=0 pcie_port_pm=off pcie_aspm.policy=performance"
Has anyone else encountered the PCIe link to the network interface dropping? rmmod-ing igb and modprobing it again spews another stack trace, and I have to reboot the board to get the link running again. The link gets lost around 24-48 hours after boot and is actually the only issue I have with this board.
I haven't done any bios mods, I've only flashed the BMC firmware with the latest available here - 12.61.21.
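(In case it helps anyone comparing notes, the ASPM state the kernel actually applied can be inspected directly. A diagnostic sketch; the device address 05:00.0 is taken from the log above:)
Code:
# Kernel-wide ASPM policy currently in force (brackets mark the active one)
cat /sys/module/pcie_aspm/parameters/policy
# Per-device link control/status for the failing NIC
sudo lspci -s 05:00.0 -vv | grep -E 'LnkCtl|LnkSta'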
 
Last edited:

Kenny

New Member
Jul 3, 2019
4
0
1
I agree with the above comments. The board is unstable and will never match the stability of a true brand-name NAS. It constantly hangs on POST code 70.
When trying to enter the BIOS, it hangs.
It often requires 30 resets just to get it to boot.
 

Th0mas51

New Member
Apr 4, 2024
28
9
3
I can confirm that the 2 boards I have are running perfectly fine, and I'm sure other people in this thread will also confirm that everything works fine on their side.

So I think you either have a faulty board, faulty RAM, or an underpowered power supply, or something has not been done correctly on your side.

There is a lot of useful information in the 24 pages of this thread, and I had to read it multiple times to understand what I had to do to get this board working smoothly, so make sure you read everything, if you haven't already.
 

Th0mas51

New Member
Apr 4, 2024
28
9
3
Where can you disable the bmc to save on power consumption?
You can search this thread; I'm pretty sure the question has been asked already, and I think the answer is that it's not possible to disable the BMC, but please confirm by searching anyway.
 

hmartin

Well-Known Member
Sep 20, 2017
360
329
63
38
Hi, I have a Gigabyte MJ11-EC1 motherboard, which is part of the G431-MM0 rack, and I've noticed there is a spot where an LDL2743-24E10-9H 74-pin SFF-8654 SAS female connector could potentially be soldered. Could you please confirm if this connector can be added, and if so, whether it will function out of the box or if adjustments in the BIOS would be required?
Luckily, I have been working for months to answer this and I can give you an answer much more useful than Gigabyte support.

Yes, if you solder the second connector (no small feat) and flash the MJ11-EC0 BIOS, you can have 8 additional PCIe lanes. I have done this and can confirm it works. You can bifurcate x8x8, x8x4x4, and x4x4x4x4.
MJ11-EC1-SlimSAS-U2_1.jpg

MJ11-EC1-544FLR.jpg

MJ11-EC1-4NVMe.jpg

You must be aware, though, that there are limitations to bifurcation:

The blog post goes into detail about everything tested, what works, and what doesn't. As others have mentioned, doing this does not make economic sense.

I have 15 extra Amphenol U10-B074-200T connectors if anyone else wants to attempt this. I will sell them at cost (6€/ea) plus shipping within the EU.

SFF-8654 8i to PCIe x8 cannot bifurcate
This is incorrect, see above.

MJ11-EC1-AMI-544FLR.jpg
MJ11-EC1-AMI-4xNVMe.jpeg

Where can you disable the bmc to save on power consumption?
You cannot disable the BMC. In fact, the board will not POST until the BMC has booted.
 
Last edited:

jnrnbt.

New Member
Sep 21, 2023
16
9
3
Germany
Regarding stability: I have had two of these boards since last year.

One is a Hyper-V server with a DC, an Exchange server, and one virtual Win11 machine, running absolutely fine since November '23.

The second one is kind of a test rig. This board has the EC0 BIOS, and the SL_SAS port is connected in SATA mode to 4 HDDs. Currently it serves as an iSCSI vSAN for a test Hyper-V cluster. It is working without any issues too.

A few weeks ago, one night, I really stressed the second board with probably 60-80 boots and shutdowns while trying out almost all of my PCIe cards and SFF adapters in the U2 and SL_SAS ports. It came up fine every time.

@hmartin Let me get this straight quick: wth? 4x 40Gbit ports in this system?
 

hmartin

Well-Known Member
Sep 20, 2017
360
329
63
38
@hmartin Let me get this straight quick: wth? 4x 40Gbit ports in this system?
I just used the cards to test PCIe lane width, since they're x8, low power (compared to a GPU), and inexpensive.

I doubt an EPYC 3151 can handle 4x 40Gbit, but someone else is welcome to try :D
 

etorix

Active Member
Sep 28, 2021
136
75
28
Yes, if you solder the second connector (no small feat) and flash the MJ11-EC0 BIOS, you can have 8 additional PCIe lanes. I have done this and can confirm it works. You can bifurcate x8x8, x8x4x4, and x4x4x4x4.
Just WOW!

Let me get this straight quick: wth? 4x 40Gbit ports in this system?
For the sake of mad computer science?
A few NVMe drives and possibly one 10/25 GbE NIC would be nice, and now we know it's possible. The economics of doubling the cost of the board with accessories almost make sense, if you count your time as free, since it's hardly possible to find an X10SDV v.2, M11SDV, or MJ11-EC0 for 200€ (or even 300€); but the soldering looks scary, and probably is.
 

jnrnbt.

New Member
Sep 21, 2023
16
9
3
Germany
Yes, my main server is an M11SDV (but the 3251 version) and I got it for about EUR 850. That would pay for more than one EUR 60 EC1, plus a lot of adapters and cables, and even the possible hours of pain. So I don't see it as that far out of economic sense.

My vSAN is running 4x SATA HDD, 2x SATA SSD, and a dual-port Fujitsu 82599 10GbE NIC at PCIe 2.0 x4. More than SATA would already max out the bus.

With working bifurcation this could turn out to be much more fun, and because these 10/40 LOMs with adapters seem to be really inexpensive… that got me thinking. But I am absolutely sure I'll kill the board the very second I try to solder something onto it.
 
Last edited:

Pallee

New Member
Feb 17, 2019
18
6
3
Luckily, I have been working for months to answer this and I can give you an answer much more useful than Gigabyte support.

Yes, if you solder the second connector (no small feat) and flash the MJ11-EC0 BIOS, you can have 8 additional PCIe lanes. I have done this and can confirm it works. You can bifurcate x8x8, x8x4x4, and x4x4x4x4.
Impressive feat! I have updated my summary post. Thank you for the hard work!

Out of curiosity, did you use a hot air station and solder paste, or go old school with an iron and loads of flux?
 

hmartin

Well-Known Member
Sep 20, 2017
360
329
63
38
Out of curiosity, did you use a hot air station and solder paste or old school with iron and loads of flux?
I do not do board rework professionally, so it may be possible to solder this without hot air, but I wouldn't try. None of my equipment is very expensive: an 858D hot air station, an MHP30 as a preheater (it didn't do a great job), and a KSGER T12 with a J02 tip.

1000098271.jpg
This is the only photo I have of the procedure; the rest of the time my hands were full. I ended up removing the CPU heat sink and blowing hot air across the top. It just wasn't possible to get the top of the board above 183°C with bottom heating, and I was worried about causing permanent damage by going too hot. 200°C doesn't seem hot enough to seriously melt the plastic of the connector.

The plastic of the SFF-8654 actually extends over the pads, so it's not possible to drag-solder the contacts to the pads as they're obscured.

As you can see from the photo in the other post, I removed the metal housing from around the connector. To remove the solder from the anchor points, I used a micro drill bit (IIRC 0.6mm). I haven't re-added the metal housing yet; I'm too concerned I will disturb some pads on the connector :eek:

With a very small tip, it was possible to fix some iffy connections after the hot air reflow, but such a small tip is unable to dump enough heat into any pin on the ground plane.

With working bifurcation this could turn out much more fun, and because these 10/40 LOMs with adapters seem to be really inexpensive… that got me thinking. But I am absolutely sure I’ll kill the board the very second I try to solder something onto it
Bifurcation works with the existing SFF-8654; you will only get x4x4, though. You need to treat it as you would the PCIe x8 slot on any normal motherboard, meaning the bifurcating adapter needs to have a clock buffer. I haven't found any x8 -> x4x4 bifurcation cards that aren't designed for NVMe, but if you find one, it should work.
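To verify what width each link actually trained at after bifurcation, a quick generic check from Linux (nothing board-specific assumed):
Code:
# LnkCap = what the device supports, LnkSta = what was actually negotiated
sudo lspci -vv | grep -E '^[0-9a-f]|LnkCap:|LnkSta:'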
 
Last edited: