2U Epyc to run up to 24 U.2


TrumanHW

Active Member
Sep 16, 2018
I'll probably run up to 15x NVMe (U.2) drives.
I'll keep it synced to a spinning array ...
Use some special vDevs ...
Probably start off with 4x 8TB NVMe drives + some Optanes.

And a special vDev, Optane only, for small files.

I've thought about making a nested set of RAIDz2 zvols as constituent vDevs of a 3-wide zvol, so I'd have to lose one and fail to recover it before losing any data.

My anticipation is that it won't be as fast as I'd think, for some reason. Between this, my spinning array syncing it (and a DLT eventually) ... it seems pretty safe.
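For what it's worth, here's a minimal sketch of what I mean by the special vDev part (pool and device names are hypothetical placeholders; you'd want /dev/disk/by-id paths in practice):

```
# Hypothetical layout: 4x NVMe in RAIDz2 for data, plus a mirrored
# pair of Optanes as a dedicated special vdev for metadata/small files.
zpool create tank \
  raidz2 nvme0n1 nvme1n1 nvme2n1 nvme3n1 \
  special mirror optane0 optane1

# Send blocks of 64K or smaller to the Optane special vdev
# (must be smaller than the dataset's recordsize to have any effect).
zfs set special_small_blocks=64K tank
```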

I'm here just looking for an Epyc 2U with 24 slots ... but I'm always open to hearing how dumb I am. :)
 

Monoman

Active Member
Oct 16, 2013
Hey @TrumanHW, I'm confused as to what this post is about.

Are you asking for advice? If yes, wrong forum.
Are you wanting to purchase something? Then you need to check out the rules for formatting and clarity. :)

Either way, good luck!
 

USER189364

Member
Jul 17, 2020
What generation are you looking for? I have seen some Dell R7425 servers at a fairly decent price (I mean, I bought one, so...), but most Rome/Milan gen stuff still costs a pretty penny.

As for setting it up with NVMe U.2 drives: not really crazy at all - with all the PCIe bandwidth these chips have, why not use it for fast storage?

I have heard that when setting up your ZFS pool layout there are some settings you should tinker with, due to how fast NVMe drives can be - but that goes for any NVMe pool, not just your planned build.
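For reference, these are the sort of knobs I've seen people mention (values are examples to benchmark against, not gospel; pool and device names are placeholders):

```
# Typical starting points people tune for NVMe-backed ZFS pools:
zpool create -o ashift=12 fastpool mirror nvme0n1 nvme1n1
zfs set atime=off fastpool        # skip access-time updates on reads
zfs set compression=lz4 fastpool  # cheap on modern CPUs, often a net win
zfs set recordsize=1M fastpool    # larger records help sequential workloads
```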
 

i386

Well-Known Member
Mar 18, 2016
I don't understand the first paragraphs :D

If you're looking for a chassis: Supermicro has the 216 chassis with 24x 2.5" bays in the front, plus 2x 2.5" in the rear via optional rear bay kits (there is one for U.2/U.3 devices now!). Replace the existing SAS backplane with something like the BPN-NVMe3-216N-S4 and you have a chassis for up to 26x U.2/U.3 devices.
The mainboard choice is more complicated. Some boards have OCuLink ports onboard; others will need "HBA"/retimer add-on cards.
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
I'm here just looking for an Epyc 2U with 24 slots ... but I'm always open to hearing how dumb I am. :)
Dual Socket:
Supermicro AS-2125HS-TNR (H13 -> Genoa 9004 / likely Bergamo 9005)
Supermicro 2124US-TNRP (H12 -> Rome 7002 / Milan 7003)
Supermicro 2123US-TN24R25M (H11 -> Naples 7001 / Rome 7002)

Single Socket:
Supermicro 2114S-WN24RT (H12 -> Rome 7002 / Milan 7003)
Supermicro 2113S-WN24RT (H11 -> Naples 7001 / Rome 7002)

All 2U, 24*2.5" NVMe U.2
 

TrumanHW

Active Member
Sep 16, 2018
As for setting it up with NVMe U.2 drives: not really crazy at all - with all the PCIe bandwidth these chips have, why not use it for fast storage?
I've heard when setting up ZFS layouts there are settings you should tinker with due to how fast NVMe drives can be; that's a general NVMe-pool heuristic, not specific to your plan.
FIRST (before the quote) ... I'M LOOKING FOR EITHER:
- An R7515 with 24 NVMe slots, or
- An R7525 with 24 NVMe slots.

PS, I don't need many cores ... 8 should be fine, and fast clocks are supposed to be good for SMB ... right?


Needs: the CPUs, the NVMe controllers, and the 24 hot-swappable SFF slots in front.
I don't care about the RAM unless it's an amount I'll definitely use, as I may use Optane DIMMs (if those aren't Intel-only).

Bothered by my yammering, given that my goal is just to get a good price on the BOLD red items above?
Just ignore everything below.

_______________________

I have an 80TB array (52TB usable) which I'll reserve for RAID data recoveries. I'll whittle down the contents of the 80TB, use a pair of SATA DOMs to boot my arrays (T320s with upgraded CPUs), and maybe swap out the CD-ROM bays for an extra pair of 10TB IBMs (in the 80TB / 52TB) in case I get a big job ... and to ensure I can rsync everything from the NVMe array to the 8x 4TB (32TB) ... to a 10x 6TB in RAIDz2 for 48TB usable (leaving about 25% free if full).
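(Roughly what I mean by "rsync everything" - the paths are hypothetical stand-ins for my actual mountpoints:)

```
# One-way sync from the NVMe pool to the spinning RAIDz2 array.
# -a preserves permissions/times, -H keeps hardlinks, --delete
# mirrors deletions so the backup tracks the source exactly.
rsync -aH --delete /mnt/nvme-pool/ /mnt/spinning-pool/backup/
```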

Even using 8x 7.68TB will yield ~42TB usable ... I'll use a pair of 5800Xs to speed up ingress, and a mirrored pair of 905Ps for small files as a vDev ... using 16 of the available 25 slots (SATA DOMs in a mirror again) ... or replace the drives with bigger SSDs one by one and sell off the 7.68s.

Right now I actually have 3.78TB drives, which I'd be replacing with the 7.68s.

I'm just hoping there's no BS with TrueNAS 13.x latency-wise with NVMe drives.

One thing I didn't mention: I'm tempted to try out dRAID, to see what it does for a 70TB rebuild time
(assuming I add two more 10TB IBM HGSTs) ...
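Something like this is what I'd test with those 12 drives (layout is hypothetical - I'd have to sanity-check the parameters against the actual disk count):

```
# Hypothetical 12-disk dRAID: draid2 = double parity, 9d = 9 data
# disks per redundancy group, 12c = 12 children, 1s = 1 distributed
# spare. Rebuilds resilver onto spare capacity spread across all
# disks, which is where the rebuild-time win should come from.
zpool create bigpool draid2:9d:12c:1s \
  sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
```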

Though I may settle for a 10th-gen HPE ... but for uniformity's sake (& because I'm familiar with iDRAC) I'd prefer Dell.