Gigabyte R180-F34 1U Server (2011-3) $94-109 + Shipping

ptcfast2

Member
Feb 1, 2021
38
58
18
Never did figure out what that other weird half-size PCIe connector is that's next to the actual full-size slots, though.
It's a custom mezzanine slot for a riser Gigabyte no longer sells. The normal purpose of a slot like that is more direct integration with the motherboard itself rather than going over the PCI Express bus. It's usually used for network adapters, but sometimes also storage controllers.

When it is used on this motherboard it occupies the half-height PCI Express slot. This was a very stupid design choice, and they revised it for later servers to match how most other manufacturers do things. In most cases the mezzanine adapter has its own slot, and most manufacturers put a cutout in the motherboard's PCB that lets the card fit below a PCI Express slot in a 1U configuration. It's a shame they didn't do that here, as they almost had a 1U server with 5(!) expansion slots. That is, if you count the unused one that can only fit a single NVMe drive...

I actually found a riser that works in the slot (Dell makes one), as I was curious, and attached a Quanta mezzanine storage adapter, which worked without issue. However, losing the flexibility of the PCI Express slot isn't worth it at all in this server.
 
Last edited:
  • Like
Reactions: Firebug24k

Firebug24k

Member
Apr 12, 2017
70
42
18
39
Hah, good info. Yeah, the layout makes it pretty much useless, it's a shame.

I really like these servers though, for the price they can't really be beat.
 

JErmolowich

New Member
Mar 16, 2019
18
9
3
North Carolina
Anybody up and running with 64GB RAM sticks? I have some Micron 64GB 2933 RDIMMs that are not recognized in this system, and I noticed that the QVL doesn't have any 64GB sticks either.
 

ptcfast2

Member
Feb 1, 2021
38
58
18
Anybody up and running with 64GB RAM sticks? I have some Micron 64GB 2933 RDIMMs that are not recognized in this system, and I noticed that the QVL doesn't have any 64GB sticks either.
I think you would need to use LRDIMM modules if you are going for 64GB per DIMM. Are you running v3 or v4 Xeons?
 

JErmolowich

New Member
Mar 16, 2019
18
9
3
North Carolina
I have a pair of 2690v4 CPUs. I have done a little more research, and all the boards with the C612 chipset that I can find do not support 64GB RDIMMs. So I find it hard to believe that Gigabyte magically figured out how to get them to run; more likely the website specs are incorrect.
 

ptcfast2

Member
Feb 1, 2021
38
58
18
I have a pair of 2690v4 CPUs. I have done a little more research, and all the boards with the C612 chipset that I can find do not support 64GB RDIMMs. So I find it hard to believe that Gigabyte magically figured out how to get them to run; more likely the website specs are incorrect.
Yeah, I think you would need LRDIMMs for C612. RDIMMs won't work at that size on this board, sadly.
 

gb00s

Active Member
Jul 25, 2018
685
240
43
Poland
I have a pair of 2690v4 CPUs. I have done a little more research, and all the boards with the C612 chipset that I can find do not support 64GB RDIMMs. So I find it hard to believe that Gigabyte magically figured out how to get them to run; more likely the website specs are incorrect.
Yeah, I think you would need LRDIMMs for C612. RDIMMs won't work at that size on this board, sadly.
If I'm not mistaken, Asus Z10PA-D8 (MemorySupport) and Z10PE-D16 take RDIMMs in 64GB size. You just need to check if the specs comply with the data from the manufacturer.
 

Markess

Well-Known Member
May 19, 2018
890
534
93
If I'm not mistaken, Asus Z10PA-D8 (MemorySupport) and Z10PE-D16 take RDIMMs in 64GB size. You just need to check if the specs comply with the data from the manufacturer.
Well yes, they take 64GB DIMMs. But as @ptcfast2 said, they will need to be LRDIMMs. The 64GB modules listed in the linked document are all LRDIMMs.
 
Last edited:

gb00s

Active Member
Jul 25, 2018
685
240
43
Poland
That's why I mentioned ...
You just need to check if the specs comply with the data from the manufacturer.
Oftentimes these lists say RDIMM, while the manufacturer's own list then describes the same modules as LRDIMM.
 

Zalouma

Member
Aug 5, 2020
39
21
8
For what it's worth, I asked Penguin directly about BIOS stuff a few weeks ago (if they developed their own past what Gigabyte offers). They said they use the same BIOS Gigabyte provides. They just change the logo for branding purposes. :rolleyes:
Yes, I agree. It's just that they had this R13 with modded stuff, which is why I wanted it here for testing. Thanks again.

I ended up buying more of these servers and own around 35 of them now. The last batch came with no caddies, but they take any Dell C1100 caddies no problem.

 
  • Like
Reactions: JErmolowich

tinfever

New Member
Nov 3, 2018
5
2
3
Sorry to bump this old thread but does anyone know the idle power consumption on these? Will they run with a single CPU to reduce power draw?

I'm considering two of these for OPNsense firewalls (primary requirements are redundant PSUs and low cost), and the other contender is a Supermicro SYS-6017R-M7UF (dual Xeon E5 v1/v2; would run with a single CPU). Running the numbers, going with the Gigabyte system would be ~$40 more expensive overall, but if it has lower power consumption by something like 20W (although I don't know what the Supermicro will draw), that would break even in the long run. Also, the Gigabyte system is more modern and has better expansion options.

It would be quite hard to get parts for the Gigabyte system if a PSU failed though. That would blow any potential cost savings...

Gigabyte vs Supermicro.PNG
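For anyone wanting to rerun the break-even math above, here's a quick sketch. The $40 price difference and 20W savings come from the post; the electricity rate is an assumption, so plug in your own.

```python
# Break-even estimate: Gigabyte costs ~$40 more up front but may idle
# ~20 W lower than the Supermicro. Electricity rate is assumed, not
# taken from the thread.

def break_even_hours(extra_cost_usd, watts_saved, rate_usd_per_kwh=0.12):
    """Hours of runtime until the power savings repay the price difference."""
    kwh_saved_per_hour = watts_saved / 1000.0           # kWh saved each hour
    savings_per_hour = kwh_saved_per_hour * rate_usd_per_kwh
    return extra_cost_usd / savings_per_hour

hours = break_even_hours(40, 20)
print(f"{hours:.0f} hours (~{hours / 8760:.1f} years)")
```

At $0.12/kWh that works out to roughly two years of continuous runtime, so the 20W difference really would matter for an always-on firewall.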
 
  • Like
Reactions: Samir

eptesicus

Member
Jun 25, 2017
102
18
18
34
I had 4 of these but just sold them to upgrade, so I can't speak on power draw. I had an E5-2640v3, 192GB RAM, all drive bays filled, and 10GbE in each, so my idle reading would not be comparable to how you're configuring yours.

I ran each of them with a single CPU without issues. You lose one of the PCIe slots when running one CPU, but everything else is good to go.

I can't remember what model the PSUs are, but they're not proprietary and you should be able to find replacements if necessary. You can also just run one if you want, which will cut down a bit on power usage if you don't need redundancy.

Also, I would certainly go with this over the Supermicro just because the hardware is newer and you get DDR4 RAM with these. And given how cheap v4 CPUs are becoming now, I'd upgrade the E5-2620v3 to a v4.
 
  • Like
Reactions: tinfever and Samir

curley

New Member
May 3, 2020
6
6
3
Sorry to bump this old thread but does anyone know the idle power consumption on these? Will they run with a single CPU to reduce power draw?

I'm considering two of these for OPNsense firewalls (primary requirements are redundant PSUs and low cost), and the other contender is a Supermicro SYS-6017R-M7UF (dual Xeon E5 v1/v2; would run with a single CPU). Running the numbers, going with the Gigabyte system would be ~$40 more expensive overall, but if it has lower power consumption by something like 20W (although I don't know what the Supermicro will draw), that would break even in the long run. Also, the Gigabyte system is more modern and has better expansion options.

It would be quite hard to get parts for the Gigabyte system if a PSU failed though. That would blow any potential cost savings...

View attachment 22378
I just happen to have a spare chassis that has a single E5-2630L with 16GB (2x8gb) and an Intel 3500 80GB ssd. No other PCI cards installed.
According to my Kill-A-Watt and the IPMI power management, the server at boot with fans running full is around 185 watts.

With the fans set to the minimum (-127) in the IPMI, running idle with both power supplies under ESXi 7.0.2 is around 57 watts, and a single supply averages 45 watts. I did not change any power management settings in ESXi from the defaults.

Hope this helps.
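Putting those measured idle figures in cost terms (57W on dual PSUs, 45W on a single PSU); the $/kWh rate below is an assumption, not from the thread:

```python
# Annual running cost at the idle draws measured above.
RATE = 0.12  # USD per kWh (assumed)

def annual_cost_usd(watts, rate=RATE):
    """Cost of running a constant load 24/7 for one year."""
    return watts / 1000.0 * 24 * 365 * rate

for label, watts in [("dual PSU (57 W)", 57), ("single PSU (45 W)", 45)]:
    print(f"{label}: ${annual_cost_usd(watts):.2f}/year")
```

So pulling the second PSU saves on the order of $12-13 per year at that rate, on top of the redundancy trade-off.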
 
  • Like
Reactions: Samir and tinfever