Gigabyte R180-F34 1U Server (LGA 2011-3) $94-109 + Shipping


eptesicus

Active Member
Jun 25, 2017
Running some E5 v3s and some E5 v4s, one CPU per node. Pretty much everything you throw at it from the E5-2600 v3/v4 family will work.

V4 just needs the latest BIOS.
Thanks! Do you happen to know if an E5-1600 v3/v4 CPU would work in a single CPU socket? I didn't see anything in the docs.
 

Default User?

New Member
Sep 5, 2020
Thanks!


Can anyone attest to the noise? I may have to replace fans.
It's not bad for a 1U server; they are very quiet if you put them in power-saving mode.
I'm comparing them to an R7910 and an MD1000 when I say they are quiet. You will still hear them a bit, especially when rebooting.
 

kfriis

Member
Apr 8, 2015
Running some E5 v3s and some E5 v4s, one CPU per node. Pretty much everything you throw at it from the E5-2600 v3/v4 family will work.

V4 just needs the latest BIOS.
@Zalouma, I appreciate all your knowledge of these servers.

I am considering getting one from the original seller, but I won't know what BIOS version it comes with. Do I need a v3 CPU just to flash it so it supports v4? Since I don't have any v3 CPUs lying around, I would prefer not to buy one just to flash, since I plan on running v4s.

Thanks!
 

Zalouma

Member
Aug 5, 2020
@Zalouma, I appreciate all your knowledge of these servers.

I am considering getting one from the original seller, but I won't know what BIOS version it comes with. Do I need a v3 CPU just to flash it so it supports v4? Since I don't have any v3 CPUs lying around, I would prefer not to buy one just to flash, since I plan on running v4s.

Thanks!
My pleasure, glad to help. For v4 you just upgrade the BIOS, and that step is completely safe, so there is honestly nothing to worry about; it can be performed directly from the BMC if you don't want to do it from the command line. The seller has many revisions: of the units we bought, some had very old BIOSes, anywhere from R8 up to R13, and none came with R15 or R16. The ones we bought were all the Penguin version, which has its own modded BIOS, and Penguin won't share their BIOS with you unless you have a service contract with them or the server was purchased through them; we tried.

Regarding your last question, these servers are awesome: you don't need RAM or a CPU installed to flash the BIOS or BMC. You can do it without installing any; check here (a quick BMC sanity-check sketch follows the Gigabyte note below):

Easy BIOS Update
Because updating your BIOS to a newer version can be a troublesome experience, GIGABYTE has developed this integrated function (no additional software to install) that lets you update the BIOS of your server / workstation motherboard(s):



  • Without having to install CPUs, memory, drives, operating system, etc.
  • Without having to power on the system (but a power supply must be connected)
  • One board at a time via our IPMI 2.0 web interface
  • Multiple boards simultaneously via command line
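
Since the board can be flashed with nothing but standby power and a reachable BMC, a quick sanity check before and after flashing is to query the BMC over IPMI from your workstation. A minimal sketch, assuming `ipmitool` is installed locally; the BMC address and credentials are placeholders, and the flash itself still happens through the MergePoint web UI or Gigabyte's tools as described above:

```python
#!/usr/bin/env python3
"""Pre/post-flash sanity check: confirm the BMC answers and report its firmware info.

Sketch only -- the BMC address and credentials below are placeholders, and the
actual BIOS/BMC flash is done through the MergePoint web UI as described above.
Requires ipmitool on the machine running this script.
"""
import subprocess

BMC_HOST = "192.168.1.50"   # placeholder: your BMC's IP
BMC_USER = "admin"          # placeholder credentials
BMC_PASS = "changeme"

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("mc", "info"))                   # BMC/MergePoint firmware revision
    print(ipmi("chassis", "power", "status"))   # standby power is enough to flash
```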
 

kfriis

Member
Apr 8, 2015
Thanks @Zalouma !

Now I feel bad - I could have figured the part about the Easy BIOS update out myself. I will try to be more precise with my questions.

What version of the Penguin BIOS is required for v4 CPU support?

So it seems it is somewhat the luck of the draw whether you receive a BIOS version that supports v4. And if you receive an older version that only supports v3, you cannot get a newer version from Penguin, which makes my concern about flashing the BIOS moot (that is the easy part!). The hard part, and the real issue, then becomes how to get hold of a newer Penguin BIOS file. Is my understanding correct?

Thanks again.
 

Default User?

New Member
Sep 5, 2020
My pleasure, glad to help. For v4 you just upgrade the BIOS, and that step is completely safe, so there is honestly nothing to worry about; it can be performed directly from the BMC if you don't want to do it from the command line. The seller has many revisions: of the units we bought, some had very old BIOSes, anywhere from R8 up to R13, and none came with R15 or R16. The ones we bought were all the Penguin version, which has its own modded BIOS, and Penguin won't share their BIOS with you unless you have a service contract with them or the server was purchased through them; we tried.

Regarding your last question, these servers are awesome: you don't need RAM or a CPU installed to flash the BIOS or BMC. You can do it without installing any; check here:

Easy BIOS Update
Because updating your BIOS to a newer version can be a troublesome experience, GIGABYTE has developed this integrated function (no additional software to install) that lets you update the BIOS of your server / workstation motherboard(s):



  • Without having to install CPUs, memory, drives, operating system, etc.
  • Without having to power on the system (but a power supply must be connected)
  • One board at a time via our IPMI 2.0 web interface
  • Multiple boards simultaneously via command line
What file from the firmware download did you flash to the BMC to upgrade the BIOS?
Also, what file is for updating the IPMI?
 

Zalouma

Member
Aug 5, 2020
Thanks @Zalouma !

Now I feel bad - I could have figured the part about the Easy BIOS update out myself. I will try to be more precise with my questions.

What version of the Penguin BIOS is required for v4 CPU support?

So it seems it is somewhat the luck of the draw whether you receive a BIOS version that supports v4. And if you receive an older version that only supports v3, you cannot get a newer version from Penguin, which makes my concern about flashing the BIOS moot (that is the easy part!). The hard part, and the real issue, then becomes how to get hold of a newer Penguin BIOS file. Is my understanding correct?

Thanks again.
To be honest I haven't tried. I would normally upgrade to the latest R16 if a v4 CPU is going to be installed. In the BIOS changelog on Gigabyte's site, the first mention of Haswell (v3) and Broadwell (v4) support is in BIOS R6. The BIOS that comes with the server is mainly the Penguin BIOS, which can carry a different revision number, so honestly I can't confirm; I'm not sure on this one, sorry. I am using the Gigabyte R16, which is the one available on their download site. When we asked, Penguin would not give us access to their download portal; it's only for customers who bought through them or have a service contract. But you don't need it, as you can flash R16 and it will work great, with just a few small tweaks depending on what OS you run.


What file from the firmware download did you flash to the BMC to upgrade the BIOS?
Also, what file is for updating the IPMI?
First I upgrade the BIOS & ME from the BMC, using image.RBU inside the RBU folder of the BIOS image I downloaded from the Gigabyte website (the R16 version); it's hit or miss. If it doesn't take, I choose BIOS from the dropdown and flash image.bin from the server_bios_md90-fs0_r16 > SPI_UPD folder. That's as far as I remember; we did it back in May 2020, getting close to a year ago.

For IPMI, use the 488 file under the latest version, and I would advise doing it over the BMC web GUI.
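
One way to confirm the flash took, and that a v4 chip is being recognised afterwards, is to read the DMI tables from whatever OS you boot. A rough sketch, assuming a Linux environment with `dmidecode` installed and run as root; the exact strings Gigabyte reports aren't guaranteed, so treat the comments as hints of what to look for:

```python
#!/usr/bin/env python3
"""After flashing, report the running BIOS revision and the CPU the firmware sees.

Sketch only -- assumes a Linux OS with dmidecode installed (run as root).
The comments describe what you'd hope to see, not guaranteed string formats.
"""
import subprocess

def read(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    print("BIOS version :", read(["dmidecode", "-s", "bios-version"]))        # should reflect the R16 image
    print("BIOS date    :", read(["dmidecode", "-s", "bios-release-date"]))
    print("CPU reported :", read(["dmidecode", "-s", "processor-version"]))   # an E5-2600 v4 should show up here
```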
 

kfriis

Member
Apr 8, 2015
Thanks!

I understand that it is possible to just grab the Gigabyte R16 BIOS, but I seem to remember you saying that only the Penguin BIOS is stable. Is this not correct? That is why I am asking about the Penguin BIOS; to me, stability is more important than a newer BIOS, but obviously I need a BIOS new enough to support v4.

So are you saying that you just upgrade to the latest R16 Gigabyte BIOS (to ensure v4 support) and then everything is fine?
 

Zalouma

Member
Aug 5, 2020
Thanks!

I understand that it is possible to just grab the Gigabyte R16 BIOS, but I seem to remember you saying that only the Penguin BIOS is stable. Is this not correct? That is why I am asking about the Penguin BIOS; to me, stability is more important than a newer BIOS, but obviously I need a BIOS new enough to support v4.

So are you saying that you just upgrade to the latest R16 Gigabyte BIOS (to ensure v4 support) and then everything is fine?
Yes, correct: the Penguin BIOS was stable out of the box for what we were running. R16 is also stable, just not out of the box; it needs a few tweaks.
 

ptcfast2

Member
Feb 1, 2021
Thanks!

I understand that it is possible to just grab the Gigabyte R16 BIOS, but I seem to remember you saying that only the Penguin BIOS is stable. Is this not correct? That is why I am asking about the Penguin BIOS; to me, stability is more important than a newer BIOS, but obviously I need a BIOS new enough to support v4.
I confirmed with Penguin that on this particular model they just use what Gigabyte develops for the server. They don't do anything special beyond the logo and FRU info.

I think the core issue with v4 processors is what the server is really geared for. I'm running ESXi just fine, but based on what I've seen I don't think these are great with Windows. Mileage will vary, but the R16 BIOS has been extremely stable so far. These came from Fidelity from what I can tell; based on the domain info left in the BMC IPMI configs, they were used for development and also some production work.

Anyways... here are my findings, as I picked up 3 and am running a hyper-converged ESXi cluster:

1) Update the BMC firmware with 488.bin via MergePoint, then update the BIOS + ME using the image.RBU file from the BIOS zip, also via MergePoint. MergePoint is no longer being developed, so try to make the software as secure as possible by turning off unneeded features.

2) The SATA ports changed between board revisions 1.0 and 1.1 because they were redesigned to supply power for SATA DOMs. R1.0 of the board cannot power a SATA DOM without an additional cable, which you can't exactly purchase anymore. Otherwise, the two revisions use the same controller.

3) There's a secret little PCIe x16 slot right next to the power supplies that can house an NVMe boot or cache drive if you so choose. I did this for my ESXi cluster, which allowed me to dedicate the low-profile x16 slot to an SFP+ card and still leave the other two full-height slots completely free for future expansion. You just need to use an adapter like this one here.

4) You can still obtain a TPM chip for these here if you want to use a TPM and/or Secure Boot. They don't come with a chip installed, and it's a weird design specific to this server board (Gigabyte has a bunch of TPM models, and none of the ones currently for sale seem to fit this server).

5) The fans can be controlled via a custom PWM offset within the MergePoint IPMI software. This means you can set your own target speed for the fans, independent of the pre-programmed options. Once set, these servers are seriously quiet, even more so if you configure energy-efficient performance in the BIOS and your chosen operating system. (A quick way to verify the resulting fan speeds is sketched after this list.)

6) The mezzanine slot is functional, but it's a custom design and kind of serves no purpose in the 1U config. I obtained a compatible riser and card (Quanta 3008) as I wanted to play around, but short of a custom cable or PCB design for the riser, you sacrifice the low-profile x16 slot at the moment. The riser design is pretty simple, so in theory one could design a basic PCB that pushes the attached mezzanine card closer to the RAM and still allows the low-profile x16 slot to be used for a smaller network card. On the bright side, the mezzanine slot works as expected, even if you can't order the "official" Gigabyte parts for it anymore.

7) The backplane of this server accepts SAS disks, and from what I can tell they should be readable by the onboard controller. You might need to play with which port the SAS cable is plugged into, as there appear to be two separate controllers, and based on the chipset documentation one is better suited for SAS drives. (See the controller-listing sketch at the end of this post.)

8) There are no 5V power connections for additional SATA drives without a custom cable that takes the power connector for the optional optical drive and splits it out into a full-sized SATA power connector. I ordered these cables to test for now, as there's not much documentation on the connector beyond the fact that it is 5V and intended for the optical drive. There are no additional USB headers on the board either, and the extra power connectors on the board are all 12V, intended for GPUs and other higher-power devices.

9) If you remove the upper quarter of the server case (where the optical drive would reside), you can fit 4 SSDs in there with some custom mounting if you so choose. I'm waiting for the custom SATA power cable I mentioned above to arrive before finishing the build, so for now I am using the front USB ports to power them.
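
On point 5: a quick way to confirm that a PWM offset set in MergePoint actually changed anything is to poll the fan sensors over IPMI and watch the RPMs settle. A minimal sketch with placeholder BMC credentials; it only reads the sensors, the offset itself is still configured in the MergePoint UI:

```python
#!/usr/bin/env python3
"""Poll the fan RPM sensors over IPMI to verify a MergePoint PWM offset took effect.

Sketch only -- the BMC address/credentials are placeholders, and the offset
itself is still set through the MergePoint web UI, not by this script.
"""
import subprocess
import time

BMC_HOST, BMC_USER, BMC_PASS = "192.168.1.50", "admin", "changeme"  # placeholders

def fan_readings() -> str:
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, "sdr", "type", "Fan"]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    for _ in range(3):          # sample a few times so you can watch the RPMs settle
        print(fan_readings())
        time.sleep(10)
```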

Honestly, for the price (seller will accept a $70 best offer), this server is epic. You're getting decently modern tech at a bargain basement price, without any HW ACL you need to really worry about, with expansion out the wazoo if you're willing to play around a little bit.

When all is said and done, this server can technically house 4x 3.5" HDDs, 4x 2.5" SSDs, 1x Full Height PCIe x16 card, 1x Low Profile PCIe x16 card, 1x NVMe via the PCIe x16 slot next to the PSUs via this adapter, and another full height PCIe x8 card (or x16 if you want to cut off the end of the x8 slot to fit a full x16 card)...all in 1U.

If I'm able to figure out the mezzanine card solution, then one could tack on a proper HBA or RAID card as well without sacrificing any expansion slots. I'm using vSAN, though, so it's not really a priority, but having something a bit better at handling storage than Intel's onboard options would give me additional peace of mind, as ideally I would be using the two additional full-height slots for a video card in each cluster member for rendering tasks.
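
On point 7: before shuffling the backplane cable between ports, it helps to see exactly which onboard storage controllers the OS enumerates. A rough sketch, assuming a Linux (live) environment with `lspci` from pciutils available; it only lists the controllers, it can't tell you which one the backplane is actually cabled to:

```python
#!/usr/bin/env python3
"""List the storage controllers the OS can see, to help work out which onboard
controller the backplane cable should be attached to.

Sketch only -- assumes a Linux environment with pciutils (lspci) installed.
"""
import subprocess

def storage_controllers():
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
    keywords = ("SATA", "SAS", "RAID")
    return [line for line in out.splitlines() if any(k in line for k in keywords)]

if __name__ == "__main__":
    for line in storage_controllers():
        print(line)   # PCHs of this era typically expose two separate AHCI controllers
```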
 

eptesicus

Active Member
Jun 25, 2017
Anyways... here are my findings, as I picked up 3 and am running a hyper-converged ESXi cluster:
I'm intrigued by your setup, if you'd care to showcase it some more. I ordered 4 for a vSAN cluster myself.

I'm only going to run a single CPU, so I can only use the full-height PCIe slots, not the little PCIe slot next to the PSUs (I'd love to do what you did with an NVMe boot drive right there). For boot, I'll just be using a Samsung FIT USB drive. I'd rather do PCIe or dual SD like my old Ciscos or my R730XD, but I don't have that option with a single CPU. I'm adding a single M.2 NVMe PCIe adapter with a 500GB WD Black NVMe drive for the vSAN cache, and a 10GbE Mellanox ConnectX-3 card, in those slots, as I believe those are the only ones that will work with a single CPU. I'll have 2x 960GB SSDs in the first two 3.5" bays (with 2.5" to 3.5" adapters) for the vSAN storage tier.
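
If you end up scripting the host prep, claiming the cache NVMe and the capacity SSDs into a vSAN disk group can also be done from the ESXi shell. A rough sketch with placeholder device IDs (find the real ones with `esxcli storage core device list`); most people will simply do this through the vCenter UI instead:

```python
#!/usr/bin/env python3
"""Claim a cache device plus capacity SSDs into a vSAN disk group from the ESXi shell.

Sketch only -- the device IDs below are placeholders, and this assumes an
all-flash setup where the capacity SSDs still need the capacityFlash tag.
Intended to be run on the ESXi host itself.
"""
import subprocess

CACHE_DEVICE  = "t10.NVMe____PLACEHOLDER_CACHE_ID"      # placeholder: the NVMe cache drive
CAPACITY_SSDS = ["naa.PLACEHOLDER_CAPACITY_1",          # placeholder: the 960GB SATA SSDs
                 "naa.PLACEHOLDER_CAPACITY_2"]

def esxcli(*args: str) -> None:
    subprocess.run(["esxcli", *args], check=True)

if __name__ == "__main__":
    # In an all-flash config the capacity SSDs are usually tagged as capacityFlash first.
    for disk in CAPACITY_SSDS:
        esxcli("vsan", "storage", "tag", "add", "-d", disk, "-t", "capacityFlash")

    # One disk group: a single cache device (-s) plus the capacity devices (-d, repeatable).
    cmd = ["vsan", "storage", "add", "-s", CACHE_DEVICE]
    for disk in CAPACITY_SSDS:
        cmd += ["-d", disk]
    esxcli(*cmd)
```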
 

ptcfast2

Member
Feb 1, 2021
I'm intrigued by your setup, if you'd care to showcase it some more. I ordered 4 for a vSAN cluster myself.
I'm running Xeon E5-2630L v3 processors for now, due to the low power draw and the fact that you can snag them for $30 each on eBay at the moment. That allowed me to go dual-CPU with minimal investment, and I can upgrade to beefier processors in the future.

As for RAM, just 16GB Hynix registered DDR4 modules, for a total of 64GB per server to start, with the ability to expand as needed.

Using a quad-port Interface Masters Niagara SFP+ card in the low-profile slot, but waiting for the SFP+ switch to get here this week before I actually get things running on those interfaces instead of the onboard ones.

I'm repurposing a bunch of shucked 8TB Western Digital drives (12x) that I was using in my workstation for the HDDs, with 2 disk groups per cluster member, each with 1 SSD + 2 HDDs. The vSAN cache SSDs are open-box 600GB Micron SEDs that I was able to get on eBay last week for about $65 each. They have crazy high write endurance and were actually sold as part of vSAN solutions a few years back, so right around the era this server was built.

It's been a bit of a multi-week project trying to figure out the best solution for these, as building the right foundation is important. There's not a lot of documentation on these guys, so figuring out their limits and behaviors has been part of the journey. Making sure you think of everything, and stay cost-effective, is key if you want a reliable solution that actually works.

Honestly, the most expensive part of all this has been the storage (at least for my purposes). I needed something stable and redundant, and being able to get this redundant/future-proof for under $2K has been awesome. I might pick up an extra server just for spare parts at this rate, as the power supplies alone go for $50; for $110 I could have an entire spare server sitting around with all the parts I need.

20210215_160005.png
 

eptesicus

Active Member
Jun 25, 2017
I'm running Xeon E5-2630L v3 processors for now, due to the low power draw and the fact that you can snag them for $30 each on eBay at the moment. That allowed me to go dual-CPU with minimal investment, and I can upgrade to beefier processors in the future.

As for RAM, just 16GB Hynix registered DDR4 modules, for a total of 64GB per server to start, with the ability to expand as needed.

Using a quad-port Interface Masters Niagara SFP+ card in the low-profile slot, but waiting for the SFP+ switch to get here this week before I actually get things running on those interfaces instead of the onboard ones.

I'm repurposing a bunch of shucked 8TB Western Digital drives (12x) that I was using in my workstation for the HDDs, with 2 disk groups per cluster member, each with 1 SSD + 2 HDDs. The vSAN cache SSDs are open-box 600GB Micron SEDs that I was able to get on eBay last week for about $65 each. They have crazy high write endurance and were actually sold as part of vSAN solutions a few years back, so right around the era this server was built.

It's been a bit of a multi-week project trying to figure out the best solution for these, as building the right foundation is important. There's not a lot of documentation on these guys, so figuring out their limits and behaviors has been part of the journey. Making sure you think of everything, and stay cost-effective, is key if you want a reliable solution that actually works.

Honestly, the most expensive part of all this has been the storage (at least for my purposes). I needed something stable and redundant, and being able to get this redundant/future-proof for under $2K has been awesome. I might pick up an extra server just for spare parts at this rate, as the power supplies alone go for $50; for $110 I could have an entire spare server sitting around with all the parts I need.

View attachment 17551
Thanks for sharing! I'm anxious to get mine in, and I've been tempted to get a spare unit myself just in case... I got 4, thinking that if one goes belly up, I can still run a 3-node cluster. I ran a 3-node vSAN cluster on UCS C240 M3 servers for a couple of years, sold them, got an R730XD, and now regret going to a single server, so I'm glad these popped up. I was going to go with some Lenovo Tinys or OptiPlex Micros, but you can't go wrong with the price of these. I have 512GB of 2133MHz RAM in my R730XD and plan to rob 64 or 96GB of it for each of these 1U servers.

You're spot on about being able to do this under $2k. I already have 6 of the 8 SSDs and the RAM, and I'm in for just under $1,700.

What kind of workload are you running? How are you liking the E5-2630L v3 CPUs with said workload?
 

ptcfast2

Member
Feb 1, 2021
What kind of workload are you running? How are you liking the E5-2630L v3 CPUs with said workload?
I'm going to be using it for a new project I'm working on that will be fairly heavy on video transcoding. I anticipate that one of the first things I add will be video cards for the cluster, but the CPUs should suffice for now. Once transcoding is offloaded to those GPUs, the processors will get some breathing room again; at that point, once usage eventually creeps back up from non-video rendering tasks, they will be replaced with Xeon E5-2695 v4s.

I would say that getting six E5-2630L v3 processors ($30 each) for less than the price of a single E5-2695 v4 ($220 each) made the most sense for now. Still plenty of power for the beginnings of said project, and plenty for general lab stuff should that project not pan out. (Really the project was a good excuse to get back into proper homelab stuff again :p).

Being able to obtain some single-slot Quadro/Tesla cards in the future and use Nvidia vGPU across all nodes in vSphere 7 is going to be pretty sweet. Since the board has dedicated PCIe power connectors, I can at least shove some cheaper cards without vGPU support in these until the more modern Tesla/Quadro cards eventually come down from their extreme prices to something a bit more... uhm... tolerable.

Now, I would say my only complaints about this server are minor and more of a hindsight issue:

1) The mezzanine slot could have been designed to act as interchangeable onboard LAN. It's in the right spot and could easily have been a way to upgrade the LAN chipset. I really wish this board had 10 Gig anything instead of needing a dedicated card. From what I can tell, Gigabyte more or less fixed this on the next generation of server boards they offered.

2) Going back to the mezzanine issue, Gigabyte also sold storage controllers for the slot. However, because of its location, using a mezzanine card hijacks the low-profile x16 slot. So what was the point, exactly? A customer should just use the low-profile x16 slot at that point and call it a day. I do think this board is better suited to a 2U chassis, where the mezzanine slot and the other PCI Express slot placements make more sense.

3) The AST2400 BMC is capable of being upgraded to support better IPMI software (Supermicro did it), but Gigabyte was like, nah. We'll be stuck with Java forever on these, which is lame, as other servers from this generation either shipped with or moved to an HTML5 KVM console and a more modern IPMI interface. Same with the BIOS; I doubt we'll see anything newer come out unless Intel has another security issue with their processors.

4) They offered a 1U version of this chassis with 8x 2.5" drives, but the case design would technically allow a 1U with 4x 3.5" and 4x 2.5" if you just added some hot-swap bays up top. I wish more manufacturers did this, as it would make it easy to offer proper high-density hyper-converged boxes. Then again, the design of most 1U boxes won't allow for this, so I think they missed a pretty cool opportunity with this board and case; it's great for HCI stuff.

5) There are no PCI Express bifurcation options that I could find in the BIOS. There's not much control over the PCI Express features on the board in general, apart from some oddball things. This is more of a heads-up for anyone wanting to run NVMe drives: it supports booting from them without any fuss, but unless it has some secret auto-bifurcation option, you might be SOL if you're looking to use a PCI Express card that carries multiple NVMe drives.

Anyways, I really thought these oddball servers through to make sure they were the right choice. I have spent plenty of time with them so far. I love them, and I'm sure you will too. :D
 

Zalouma

Member
Aug 5, 2020
Awesome setup, thank you for sharing. I use something like this to avoid the power problem when adding more drives, and it works great. You do have 2 SATA ports to plug a cable into, right next to the first PCIe slot.


So you can easily add 2 drives. This server has a lot of PCIe slots available: 2 full-size and 2 low-profile, plus the hidden one, so you can easily load 4 additional drives (2 SATA M.2 and 2 NVMe), or you can just use 2 of the Supermicro dual-NVMe cards and they will work.

This will give you a total of 8 drives without the need to remove or modify anything.
 

ptcfast2

Member
Feb 1, 2021
This will give you a total of 8 drives without the need to remove or modify anything.
It's a good solution if you don't plan on using any of the expansion slots. You can use a SAS-to-SATA breakout cable plus the 2 SATA ports, and with M.2 SATA PCI Express adapter cards you can technically add a bunch of M.2 SATA drives that way, up to 6-8 more depending on your config and chosen cards. NVMe drives would be more limited due to the lack of bifurcation, but I will test whether the option suddenly "appears" or it works just by putting a card in. I have a spare Asus Hyper M.2 Gen 4 that I was about to install in my workstation, so it's a good time to test this. I doubt it will work, but you never know!

Edit: It won't work without modifying the BIOS. If there's some interest, I can provide a modded BIOS file with some hidden settings enabled (which would allow you to configure bifurcation) as well as the latest Intel microcode updates.
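
If anyone does test a multi-drive carrier like the Hyper M.2 after a BIOS mod, a quick way to tell whether bifurcation actually took effect is simply to count how many NVMe controllers enumerate. A rough sketch, assuming a Linux environment with `lspci`; without working x4/x4/x4/x4 bifurcation, a passive carrier will typically only expose the drive in its first slot:

```python
#!/usr/bin/env python3
"""Count enumerated NVMe controllers to check whether PCIe bifurcation took effect.

Sketch only -- assumes a Linux environment with pciutils (lspci) installed.
Without working bifurcation, a passive multi-M.2 carrier usually only exposes
the drive in its first slot.
"""
import subprocess

def nvme_controllers():
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
    # NVMe drives show up as "Non-Volatile memory controller" devices
    return [line for line in out.splitlines() if "Non-Volatile memory controller" in line]

if __name__ == "__main__":
    drives = nvme_controllers()
    print(f"{len(drives)} NVMe controller(s) visible:")
    for line in drives:
        print(" ", line)
```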
 