Gigabyte R272-Z32 Review: This 24x NVMe AMD EPYC 7002 Server is a Home Run


Edu

Member
Aug 8, 2017
You say it supports "8x DIMMs at DDR4-3200 speeds or 16x at DDR4-2933"
Is that a limit of the motherboard, or is it a limit of the CPU? I mean, do all EPYC systems have the same limitation?

101

Member
Apr 30, 2018
What's the intended use case for slot 7 "PCIe ESM" and the slimSAS 4i connectors labeled slink0-3?

Jeggs101

Well-Known Member
Dec 29, 2010
Those Storage Review guys didn't catch that the PCIe slots were not functional. I guess they didn't actually test the server.

@StevenDTX It's $2,250-$2,275 at a bunch of places online. There's even a Google Shopping card with prices from multiple shops, and the ThinkMate guys already have it on their site with a configurator: Gigabyte 2U Server R272-Z32

It's curious that ThinkMate doesn't have the P-series CPUs listed, only the standard ones. The STH review used almost all P SKUs, and since it's a single socket, I'd think they'd work.

Patrick

Administrator
Staff member
Dec 21, 2010
All EPYC 7002 CPUs will downclock at 2DPC. You will also see DDR4-2933 at 1DPC on most of the older Gen1 platforms.
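For reference, a quick back-of-the-envelope sketch of what that 1DPC vs 2DPC trade-off works out to (theoretical peak numbers only, not measured throughput):

```python
# Theoretical peak bandwidth = channels * transfer rate * 8 bytes/transfer.
# EPYC 7002 has 8 memory channels; the DDR4 speeds are from the review.

def ddr4_peak_gbs(channels: int, mt_per_s: int) -> float:
    """Theoretical peak DDR4 bandwidth in GB/s (64-bit bus per channel)."""
    return channels * mt_per_s * 8 / 1000  # MT/s * 8 B = MB/s -> GB/s

one_dpc = ddr4_peak_gbs(8, 3200)   # 8x DIMMs at DDR4-3200
two_dpc = ddr4_peak_gbs(8, 2933)   # 16x DIMMs at DDR4-2933

print(f"1DPC: {one_dpc:.1f} GB/s, 2DPC: {two_dpc:.1f} GB/s")
```

So the 2DPC penalty is roughly 8% of peak bandwidth in exchange for double the DIMM capacity.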

@ari2asem the EPYC 7001 chips are missing features like PCIe Gen4. We did not test this platform with older chips in order to get the review out in a timely manner. Here is the spec sheet, which may help: R272-Z32 (rev. 100) | Rack Server - GIGABYTE Global

On the other hand, I would expect almost all buyers of the system to use EPYC 7002 chips in it.

101

Member
Apr 30, 2018
Is the broken forward compatibility from 7001 to 7002 parts indicative of a potential small fire sale of 7001-only platforms? Seems like a great homelab niche if the logistics can be managed.

Edu

Member
Aug 8, 2017
All EPYC 7002 CPUs will downclock at 2DPC. You will also see DDR4-2933 at 1DPC on most of the older Gen1 platforms.
Ah yeah, I see now that all servers do the same thing. Cascade Lake downclocks to DDR4-2666 at 2DPC.

On another note, I was thinking that since EPYC Rome is meant to have a single PCIe root complex, DMA and RDMA should theoretically work a lot better than on Cascade Lake (which doesn't have a single root).

BullCreek

New Member
Jan 5, 2016
We've been evaluating the various 2U 1P EPYC solutions for use as next-generation all-flash ZFS storage servers and have run into some problems.

Tyan TN70A-B8026 - works with the latest OpenIndiana; OmniOS CE Stable and Bloody spontaneously reboot without a crash dump during install
SM AS-2113S-WN24RT - all the illumos-based distros I tried, including OI, spontaneously reboot without a crash dump during install, although everything works fine with Linux/Windows

If you still have this box, any chance I could get you to try OpenIndiana or OmniOS CE on it and report back? Out of the three, I like this Gigabyte best on paper, if it works, although I have to say that despite being the first offering and now feeling a bit dated, the Tyan isn't bad and seems solid, at least with OpenIndiana. Looks like the Gigabyte may even be cheaper than the Tyan, if Google pricing is to be believed.
Jul 16, 2019
I've been looking at this server as a render-farm NAS storage solution. It looks very promising.

However, Level1Techs built one of these for Linus and ran into some problems. Any insight into how to alleviate the issues they mentioned? How would you go about running one of these with 24 NVMe drives without hitting the same bottlenecks they hit?


What would my actual bandwidth throughput look like with the setup I mentioned?

Patrick

Administrator
Staff member
Dec 21, 2010
Yea, they are trying to use the system as a more traditional RAID box. Remember, Microsoft got over 26GB/s from just 4x PCIe Gen4 SSDs in a Rome system. There they used Windows Server 2019 with Azure Stack HCI and Storage Spaces Direct. We talked about it again recently: https://www.servethehome.com/kioxia-cd6-and-cm6-pcie-gen4-ssds-hit-6-7gbps-read-speeds-each/

This is one of those areas where 24x SSDs hit other bottlenecks. Another example: you only have a PCIe Gen4 x8 slot to get data off this system, and moving data over a 100GbE PCIe Gen4 link is not a trivial task either.

My advice here is usually if you have to ask, pay for the software/ solution that will get it done. I know that is a bad answer, but you can see what they went through.

If you just want local performance, do not use parity RAID and be very cognizant of which cores are hitting which SSDs.
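To put rough numbers on that bottleneck, here is a sketch using the figures mentioned above (~6.7GB/s per Gen4 SSD, a Gen4 x8 slot, 100GbE). These are theoretical peaks; real-world throughput will be lower:

```python
# Rough aggregate-vs-egress arithmetic for a 24x NVMe Gen4 box.
# All figures are approximate/theoretical, not benchmarks.

N_SSDS = 24
SSD_READ_GBS = 6.7             # per-drive sequential read (e.g. Kioxia CD6)
PCIE_GEN4_LANE_GBS = 1.969     # ~2 GB/s per Gen4 lane after 128b/130b encoding
SLOT_LANES = 8                 # the x8 slot available for a NIC
GBE100_GBS = 100 / 8           # 100GbE line rate in GB/s

local_read = N_SSDS * SSD_READ_GBS             # aggregate local reads
slot_limit = SLOT_LANES * PCIE_GEN4_LANE_GBS   # ceiling of the x8 slot
nic_limit = min(slot_limit, GBE100_GBS)        # the NIC caps off-box traffic

print(f"local: {local_read:.1f} GB/s, off-box: {nic_limit:.1f} GB/s")
```

Aggregate local reads can outrun the off-box path by more than 10x, which is why for NAS use the network link, not the SSDs, is the first wall you hit.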
Jul 16, 2019
Thank you Patrick - I somehow missed the article you linked to.

So, long story short, this is still a fantastic server for Microsoft storage server use, given appropriate software and configuration. Thanks again for the quick reply!