Raid on a Gigabyte AMD Server


New Member
Dec 5, 2019
Hi everyone, I have a question regarding a setup that I am building for an educational VDI environment. Below is the list of the hardware I have been approved to buy...

Gigabyte R282-Z92
AMD EPYC 7742 (x2, for a total of 128 cores)
64GB DDR4 LRDIMM (x32 sticks, for a total of 2TB)
Micron 9300 PRO 3.84TB (x24 drives, to fill up the front U.2 slots)
Mellanox ConnectX-6 VPI dual-port PCIe 4.0
Mellanox SN3800 switch

This server will be in its own room, about 200 feet from the lab that will be using it, and it has a dedicated incoming fiber line for internet access.

My question is the following. Since most of the parts in this server are complete overkill for a VDI environment, I have the ability to play around and push it to its limit. Given that the Gigabyte server can take 24 hot-swap U.2 drives, I would like to see how I could create a RAID setup that the system would boot from. I am considering RAID 10 or even RAID 50, but I have the following concern: will the CPUs be able to take advantage of more than the 12GB/s SAS RAID cards I keep finding online from LSI? Is there such a thing as a PCIe 4.0 RAID controller (besides the consumer-level Gigabyte Aorus PCIe 4.0 card)? Essentially, with a RAID 10 setup I should see approximately a 24x increase in read speed and a 12x increase in write speed. Given that the Micron 9300 has 3.4 GB/s read and write speeds, would it be possible to push that number higher than what most PCIe 3.0 RAID controller cards allow? Or is there something else I should do that would be more beneficial, like creating a separate storage node and compute node? Thank you all for your help in figuring this out.
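As a back-of-envelope sanity check on those scaling claims, here is a rough sketch using the per-drive number quoted above (3.4 GB/s) and commonly cited link ceilings. Real throughput depends heavily on workload, queue depth, and CPU overhead, so treat these as theoretical upper bounds only:

```python
# Back-of-envelope bandwidth math for the proposed array.
# Numbers are from this thread and public link-rate figures; real-world
# results will be lower.

DRIVE_SEQ_GBPS = 3.4   # per-drive sequential throughput quoted above
NUM_DRIVES = 24

# RAID 10: reads can be serviced by all 24 drives, writes land on 12 mirror pairs.
raid10_read = DRIVE_SEQ_GBPS * NUM_DRIVES          # theoretical aggregate read
raid10_write = DRIVE_SEQ_GBPS * (NUM_DRIVES // 2)  # theoretical aggregate write

# Ceilings the array would actually hit if funneled through one slot:
pcie3_x8 = 8.0    # GB/s, host link of a typical PCIe 3.0 x8 RAID controller
pcie4_x16 = 32.0  # GB/s, a single PCIe 4.0 x16 slot

print(f"RAID 10 theoretical read:  {raid10_read:.1f} GB/s")
print(f"RAID 10 theoretical write: {raid10_write:.1f} GB/s")
print(f"PCIe 3.0 x8 controller caps the array at ~{pcie3_x8:.0f} GB/s")
print(f"Even a single PCIe 4.0 x16 slot caps it at ~{pcie4_x16:.0f} GB/s")
```

In other words, any single-card controller is the bottleneck long before 24 NVMe drives are, which is why the drives are normally attached directly to CPU PCIe lanes instead.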


Active Member
Apr 2, 2015
These LSI (Broadcom) RAID controllers you mention are PCIe 3.0 x8 cards, meaning the max throughput for each card is about 8GB/s. SAS3 speed is 12Gb/s per link (~1.2GB/s). But with NVMe drives, SAS speed is not relevant to you.

For putting 24 NVMe drives into a single RAID array, I think the best solution is some kind of software RAID (I don't know if VROC is supported on these EPYC servers).
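For weighing the two layouts you mentioned, here is a quick capacity/fault-tolerance comparison over 24 x 3.84TB drives. The RAID 50 group size of 6 drives is an arbitrary choice for illustration; other groupings change the numbers:

```python
# Usable capacity and fault tolerance for the two layouts under
# consideration. Group size for RAID 50 is an assumption, not a given.

DRIVE_TB = 3.84
N = 24

# RAID 10: 12 mirror pairs; half the raw capacity is usable.
raid10_usable = DRIVE_TB * N / 2

# RAID 50: stripe across RAID 5 groups; each group gives up one drive to parity.
group_size = 6
groups = N // group_size          # 4 groups of 6 drives
raid50_usable = DRIVE_TB * (N - groups)

print(f"RAID 10 usable: {raid10_usable:.2f} TB; survives 1 failure per mirror pair")
print(f"RAID 50 usable: {raid50_usable:.2f} TB; survives 1 failure per RAID 5 group")
```

RAID 10 costs far more capacity but rebuilds much faster per failure; RAID 50 keeps more space usable at the cost of parity-rebuild time on drives this large.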

Here is a review of a similar server.