Gigabyte MF51-ES0 for an SSD NAS?


Alexdi

Member
Dec 7, 2012
I'm considering this board for an NVMe-based NAS. The appeal is built-in dual 10GbE, remote management, a bucket of PCIe lanes that I hope can be bifurcated, and a relatively low price. I'm also considering the ASRock X570D4U-2L2T and B550D4U-2T, both of which have bifurcation, PCIe 4.0, and potentially lower power draw and better single-core performance than the Intel platform, though they cost more and would support maybe six SSDs in total.

Does anyone have experience, positive or negative, with the Gigabyte board?

Tech Junky

Active Member
Oct 26, 2023
Wouldn't be my first choice, but they seem to be making better boards these days. My issue is how they divide slots/lanes.

For the six drives, though, it depends on how much capacity you're talking about and whether you use M.2 or U.x. I started off last year with the same intent and went with a mobo that could hold at least five M.2 drives (1 OS / 4 RAID), then switched gears to U drives instead, because you can get double the capacity for half the price. Instead of using 4x 4TB M.2 drives ($800), I went with a single 15.36TB U.3 ($950) and don't have to deal with RAID anymore.

I had been running spinners in R10 for a few years and just wanted more speed, less physical space, lighter weight, and some other things. The 2.5" NVMe U.3 weighs a few ounces vs the ~12 lb of spinners. The U drive could of course be a single point of failure, but since it's built to a more stringent standard for enterprise/DC use, it should hold up just fine. That said, I started with some Micron versions that both took a crap and lost their partition tables, one within a couple of hours and the other in less than a week. I switched to Kioxia and haven't had an issue since install (~6 months). Another thing: Microns tend to run hot, where the Kioxia in my case just sits around 40C.
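For what it's worth, here's the quick $/TB math on those numbers (a rough sketch using the ballpark prices above; which way it tips depends on which M.2 capacity you compare against):

```python
# Rough $/TB comparison, using the ballpark prices quoted above.
options = {
    "4x 4TB M.2 (RAID)": (4 * 4.0, 800),   # (capacity in TB, price in USD)
    "1x 15.36TB U.3":    (15.36,   950),
}

# Note: per-TB, the win depends on which M.2 capacity class you compare
# against; high-capacity M.2 sticks get expensive fast.
for name, (tb, usd) in options.items():
    print(f"{name}: {tb:.2f} TB @ ${usd} -> ${usd / tb:.2f}/TB")
```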

I rebuilt from an Intel setup to AMD as well. I'm just using an X670E board / 7900X / 32GB. If I wanted to run multiple U drives, I'd get a quad M.2 adapter card / 4x M.2-to-OCuLink adapters / 4x cables. Any additional drives beyond that would have to use the M.2 sockets on the board, where two run at Gen4+ speeds and the third at Gen3.

Alexdi

Member
Dec 7, 2012
I think we're on parallel paths. My current setup is an X9SCM with 6x 6TB in RAID-6 on an LSI 9361, fronted by PrimoCache with a 970 Pro.

When it works, it's fast, though hotter and noisier than I'd like. But I'm tired of troubleshooting drive failures, sense errors, and multi-day rebuild times. The RAID exists for speed, reliability, and a single partition. Separate U.2 drives strike me as better at the first two, and I don't care about the last enough to stomach the complication it introduces.

The new plan is to start with a single 15.36TB U.2 that gets a nightly backup to a 16TB hard drive. After I wrote this post, I realized I didn't really need the extra lanes of the Gigabyte board.
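The backup job itself should be trivial. Something like this is all it needs (a minimal sketch; the mount points are hypothetical placeholders, and it assumes rsync on a Linux host):

```python
#!/usr/bin/env python3
"""Nightly mirror of the U.2 volume to the 16TB spinner.

Sketch only: paths are hypothetical; schedule from cron, e.g.
    0 2 * * * /usr/bin/python3 /opt/scripts/nightly_backup.py
"""
import subprocess
import sys

SRC = "/mnt/u2/"      # U.2 NVMe volume (trailing slash = copy contents)
DST = "/mnt/backup/"  # 16TB hard drive mount

# -a preserves ownership/perms/times; --delete keeps DST an exact mirror.
rc = subprocess.run(["rsync", "-a", "--delete", SRC, DST]).returncode
sys.exit(rc)
```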

I'm not aware of any X670E boards with remote management. That's a mandatory feature for me. Do you know of anything preferable to that ASRock X570?

Tech Junky

Active Member
Oct 26, 2023
I'm not aware of any X670E boards with remote management. That's a mandatory feature for me. Do you know of anything preferable to that ASRock X570?
Remote management as in iLO/IPMI? I don't bother with that sort of thing and just SSH into the box. If it requires a hands-on approach, it's easily accessed. There are some "console" boxes you could probably get for remote KVM control.

The board I went with is an ASRock, but it's the PG Lightning. The issue I had when looking at boards was the crappy splits the other OEMs made with the slots. Plus, I got the board used on Amazon (aka a return) for $160. I figured out the reason they were selling so cheap, though: the AGESA BS that AMD uses causes gremlins in how things operate. It took trying a few versions before solving the issues that crept up.

single 15.36TB U.2 that gets a nightly backup to a 16TB hard drive
KISS. Make life simple. I don't bother with backups, though; I've run the RAID for 5+ years w/o issues, and I have dual M.2s in my laptop for redundancy. The file server aspect is just to put the data where the bandwidth is, both LAN/WAN, since it's also the router. I only really dove into RAID as something to do rather than a need, but having the space led to hoarding data, because if you have the space you tend to see it as a challenge to fill it up.

JimmyBlack

New Member
May 13, 2024
Copy and paste from another thread here.


Hey mate, signed up just to reply to your post.

I grabbed the Gigabyte MF51-ES0. It's trash.

It works okay for basic stuff, but:

- BOTH 10GbE ports are useless. They fail every few days (they go offline, and you need to reboot the server to bring them back). I think the chipset overheats. I had to install an SFP+ card and use an SFP connection to my switch. No more problems. (If you want to measure how often they drop, there's a small logging sketch after this list.)

- There is no bifurcation. It's broken, badly. I contacted Gigabyte about it, and they basically told me to go away, don't care. I've updated the BIOS, etc.

- There is no Resizable BAR, so when I dropped an Intel Arc A310 card into it for hardware decoding (Emby), it just alerts you non-stop in the notification list until you turn it off.
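For anyone wanting to track the 10GbE drops on a Linux box, a minimal watchdog sketch (the interface name is a hypothetical placeholder; check yours with `ip link`):

```python
#!/usr/bin/env python3
"""Log link-state changes so you can see how often the 10GbE port drops.

Linux-only sketch; 'enp1s0f0' is a hypothetical interface name.
"""
import time
from datetime import datetime
from pathlib import Path

IFACE = "enp1s0f0"
STATE = Path(f"/sys/class/net/{IFACE}/operstate")

last = None
while True:
    current = STATE.read_text().strip()  # "up", "down", etc.
    if current != last:
        print(f"{datetime.now().isoformat()} {IFACE} is {current}", flush=True)
        last = current
    time.sleep(5)
```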

The board works... sort of. I'm stuck with it now. I'm now looking at the Asus range, but I want to know if the bifurcation works. Can't really see much testing, only those random reviewers that don't really review.

bayleyw

Active Member
Jan 8, 2014
I feel like the lack of ReBAR and bifurcation isn't the board's fault, given they never told you these features were supported. As for the 10GbE dropping, once every several days means it's not a thermal problem, since nothing on the board should have a thermal time constant of several days.

JimmyBlack

New Member
May 13, 2024
You "feel" wrong.

They did advertise that the board supports bifurcation, and the BIOS has bifurcation options that let you specify how to split the slots, but it doesn't work at all, even with a Gigabyte NVMe M.2 card. I've tried dual- and quad-slot cards. Nothing. Only the first drive will work. Their support was very unhelpful.
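If anyone wants to repeat the test on a Linux box, counting enumerated controllers is enough (a sketch; on Windows, Device Manager shows you the same thing):

```python
#!/usr/bin/env python3
"""Count the NVMe controllers that actually enumerated (Linux sketch).

With working x4/x4/x4/x4 bifurcation, a quad M.2 card should add one
controller per populated slot under /sys/class/nvme.
"""
from pathlib import Path

controllers = sorted(Path("/sys/class/nvme").iterdir())
print(f"{len(controllers)} NVMe controller(s) visible")
for c in controllers:
    print(f"  {c.name}: {(c / 'model').read_text().strip()}")
```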

Regarding the 10GbE ports: it's a board and chipset problem. Without warning, the ports fail in Device Manager and can only be woken back up with a reboot. Rebooting a server every few days isn't really an option. The 1-gig ports work fine, as does the IPMI system. Multiple searches showed a significant number of people having the same issue, even on other brands' boards.

VROC doesn't work either, and it is advertised. I bought the RAID key and that did not work. Why? Because bifurcation doesn't work and I cannot plug 4x NVMe drives in.

And finally, ReBAR is a simple BIOS update. There is no reason why it should not be supported.

They've ditched BIOS support/updates, so this board is dead in the water, full of false advertising, and has zero support.

Markess

Well-Known Member
May 19, 2018
Support on this Gigabyte board is very, very poor. Plus, the "manual" is literally a single-page Quick Start Guide that wasn't even labeled well.

That said, bifurcation works on mine. Only one of the x16 slots, but it worked for me. @JimmyBlack, which BIOS and BMC firmware versions do you have installed?

Mine came with BIOS firmware R01, and I needed to bump it to R07 before bifurcation was on the table.

Through trial and error, I've determined that (for me at least) the MegaRAC BMC firmware was super flaky. Even when it seemed to work, the system had fault LEDs lit. The Gigabyte-branded one (v1.98), while older and not as full-featured or aesthetically pleasing, was much more stable.

At the same time, I've also got a Supermicro X11SRM-VF running the same CPU (Xeon W-2135). It's better behaved, and all 3 PCIe slots support bifurcation (when I got it, I stuffed it full of nine M.2 NVMe drives and ran it that way for a few months). The mATX form factor is more limiting, and I've not bothered with the OCuLink ports at all because the cables would cost me more than I paid for the board. But it's been very stable.