Thanks!
I understand that it's possible to just grab the Gigabyte R16 BIOS, but I seem to remember you saying that only the Penguin BIOS is stable. Is that not correct? That's why I'm asking about the Penguin BIOS - to me, stability is more important than a newer BIOS, but obviously I need a BIOS new enough to support v4.
I confirmed with Penguin for this particular model that they just use what Gigabyte develops for the server; they don't do anything special aside from the logo and FRU info.
I think the core issue with the v4 processors is what the server is really geared for. I'm running ESXi just fine, but based on what I've seen I don't think these are great with Windows. Mileage will vary, but the R16 BIOS has been extremely stable so far. These came from Fidelity from what I can tell - they were used for development and some production work, based on the domain info left behind in the BMC IPMI configs.
Anyways...here are my own findings, as I picked up 3 and am running a hyper-converged ESXi cluster:
1) Update the BMC with 488.bin through MergePoint first, then update the BIOS + ME using the image.RBU file in the BIOS zip, also via MergePoint. MergePoint is no longer being developed, so lock it down as much as possible by turning off unneeded features (there's a rough audit sketch after this list).
2) The SATA ports changed between R 1.0 and R 1.1 because they were redesigned to supply power for SATA DOMs. R 1.0 of the board does not have SATA ports that can power a SATA DOM without an additional cable, which you can't exactly purchase anymore. Otherwise they're the same controller.
3) There's a secret little PCIe x16 slot right next to the power supplies that can house an NVMe boot or cache drive if you so choose. I did this for my ESXi cluster, which allowed me to dedicate the low profile x16 slot to an SFP+ card and still leave the other two full height slots completely free for future expansion. You just need to use an adapter like this one here.
4) You can still obtain a TPM chip for these here if you want to use a TPM and/or Secure Boot. They don't come with a chip installed, and it's a weird design specific to this server board (Gigabyte has a bunch of TPM models, and none of the ones currently available for sale seem to fit this server).
5) The fans can be controlled via a custom PWM offset within the MergePoint IPMI software. This means you can set your own target speed for the fans, independent of any of the pre-programmed options. Once set, these servers are seriously quiet, and even more so if you set them for energy-efficient performance in the BIOS and your chosen operating system. (There's a quick sensor sanity-check sketch after this list.)
6) The mezzanine slot is functional, but it's a custom design and serves little purpose in the 1U config. I obtained a compatible riser and card (Quanta 3008) because I wanted to play around, but short of a custom cable or PCB design for the riser, you sacrifice the low profile x16 slot at the moment. The riser design is pretty simple, so in theory one could design a basic PCB that pushes the attached mezzanine card closer to the RAM and still allows the low profile x16 slot to be used for a smaller network card. On the bright side, the mezzanine slot works as expected, even if you can't order the "official" Gigabyte parts for it anymore.
7) The backplane of this server accepts SAS disks, and from what I can tell the onboard controller should be able to read them. You might need to play with which port the actual SAS cable is plugged into, as there appear to be two separate controllers and, based on the chipset documentation, one is better suited to SAS drives.
8) There are no 5V power connections for additional SATA drives without a custom cable that takes the power connector for the optional optical drive and splits it out into a full-sized SATA power connector. I ordered those cables to test for now, as there's not much documentation on the connector other than that it's 5V and meant for the optical drive. There are no additional USB headers on the board either, and the other power connectors on the board are all 12V, intended for GPUs and other higher-voltage devices.
9) If you remove the upper 1/4 of the server case (where the optical drive would reside) you can fit 4 SSDs in there with some custom mounting if you so choose. I'm waiting for the custom SATA power cable I mentioned above to arrive so I can finish the build, so for now I'm using the front USB ports to power them.
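Regarding the MergePoint hardening in item 1: most of that is done in the web UI, but if you want a quick way to see what's actually enabled on the BMC, a rough sketch like the one below works from any machine with ipmitool installed. The host address and credentials are placeholders, and I'm assuming the BMC answers standard IPMI 2.0 over lanplus - adjust for your setup.

```python
#!/usr/bin/env python3
# Rough sketch: read-only audit of a BMC over standard IPMI using ipmitool.
# Assumptions: the BMC speaks IPMI 2.0 over lanplus, and the host/user/password
# below are placeholders to replace with your own values.
import subprocess

BMC_HOST = "192.168.1.50"   # placeholder BMC address
BMC_USER = "admin"          # placeholder credentials
BMC_PASS = "changeme"

BASE = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
        "-U", BMC_USER, "-P", BMC_PASS]

# Read-only checks: LAN settings, configured users, and Serial-over-LAN info,
# so you can see what's enabled before you start turning features off.
CHECKS = [
    ["lan", "print", "1"],
    ["user", "list", "1"],
    ["sol", "info", "1"],
]

for args in CHECKS:
    print(f"### ipmitool {' '.join(args)}")
    result = subprocess.run(BASE + args, capture_output=True, text=True)
    print(result.stdout or result.stderr)
```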
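And for item 5: the PWM offset itself gets set in the MergePoint UI, but after changing it I like to confirm the fans and temps actually settle where I expect. A small sketch along these lines polls the standard IPMI sensor records in-band; sensor names vary by board, so it just prints whatever the BMC reports.

```python
#!/usr/bin/env python3
# Rough sketch: poll fan and temperature sensors after changing the PWM offset.
# Assumes ipmitool is installed on the host and can reach the BMC in-band.
import subprocess
import time

def read_sensors(sensor_type: str) -> str:
    # "sdr type Fan" / "sdr type Temperature" are standard ipmitool subcommands
    out = subprocess.run(["ipmitool", "sdr", "type", sensor_type],
                         capture_output=True, text=True)
    return out.stdout.strip()

for _ in range(3):          # take a few samples a minute apart
    print(read_sensors("Fan"))
    print(read_sensors("Temperature"))
    print("-" * 40)
    time.sleep(60)
```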
Honestly, for the price (the seller will accept a $70 best offer), this server is epic. You're getting decently modern tech at a bargain-basement price, without any HW ACL you really need to worry about, and with expansion out the wazoo if you're willing to play around a little bit.
When all is said and done, this server can technically house 4x 3.5" HDDs, 4x 2.5" SSDs, 1x full height PCIe x16 card, 1x low profile PCIe x16 card, 1x NVMe drive in the PCIe x16 slot next to the PSUs (using the adapter linked above), and another full height PCIe x8 card (or x16 if you want to cut the end off the x8 slot to fit a full x16 card)...all in 1U.
If I'm able to figure out the mezzanine card solution, then one could tack on a proper HBA or RAID card as well without sacrificing any expansion slots at all. I'm using vSAN though, so it's not really a priority, but having something a bit better at handling storage than Intel's options would give me additional peace of mind, as ideally I'd be using the two additional full height slots for a video card in each cluster member for rendering tasks.