F2uSoe - initial hurdle passed, 2 Intel NVMe Cages in 1 Server


Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
@dba caught my attention in his DCDW. @Patrick showed me how to cram 4x 2.5" NVMe drives into an Intel server. Now my goal is to turn 10 NVMe drives and 12 SSDs into a smoking-fast SAN. With 1.2TB NVMe drives and 2TB SSDs, it would have almost 18TB of storage.
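
Rough back-of-the-envelope math on that capacity figure - a quick sketch only, and the "everything mirrored" RAID 10 layout is an assumption, not a final design:

```python
# Rough capacity check for the planned pool; drive counts/sizes from the plan above.
# Assumes the whole pool ends up mirrored (RAID 10) - that layout is an assumption.
nvme_raw = 10 * 1.2   # ten 1.2TB Intel 750s -> 12 TB raw
ssd_raw  = 12 * 2.0   # twelve 2TB SATA SSDs -> 24 TB raw

total_raw = nvme_raw + ssd_raw        # 36 TB raw
usable_mirrored = total_raw / 2       # ~18 TB usable if everything is mirrored
print(f"raw: {total_raw:.0f} TB, mirrored: {usable_mirrored:.0f} TB usable")
```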

The first part of the plan was to make sure I could use two of the Intel A2U44X25NVMEDK2 kits (4x 2.5" NVMe drive cages), and that worked.

IMG_1890.JPG
The Adaptec RAID controller (in HBA mode) gave me acceptable perf for the tier 2. Once the 6 other NVMe drives show up, I'll post some numbers with 10 drives in a RAID 0 just for kicks. Hopefully with the 40GbE switch and the cards I just purchased from @PithyChats I'll be able to get that speed to my Hyper-V hosts and SQL servers.

Edit - we just finished routing the cables correctly and the lid closes :)

IMG_1889.JPG

pic.png
Thom
 
Last edited:

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
You suck, lol j/k, been watching your 'ramp-up' in anticipation for some 'awesomeness'
 

Chuntzu

Active Member
Jun 30, 2013
383
98
28
Let me know how this goes, as I have 8 NVMe drives currently running on Server 2012 R2.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,518
5,821
113
@Thom - awesome that you got two of those backplanes in that server!
 
  • Like
Reactions: Chuntzu

Patrick

Administrator
Staff member
Dec 21, 2010
12,518
5,821
113
@Thom - just so you know. I got back from a run and tried to give you a second like. Unfortunately my admin powers do not allow me to do so.

Please keep us updated with progress on this build.
 
  • Like
Reactions: Naeblis

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
Some updates

Started with an Intel R2208WTTYS purchased from my favorite eBay seller kalleyomalley.

Added two 4x NVMe drive cages (A2U44X25NVMEDK); I had to remove the DVD/USB/power connector in the third slot. That required an A2USTOPANEL to power on the server. The server chassis does not have the standoffs for mounting the kit, so I used some double-sided tape.

IMG_1891.JPG

Originally I wanted 24 SSDs, but the throughput was not there. You can get 24 SSDs in the 2U chassis by adding two Icy Dock ToughArmor MB998SP-B 8-bay cages; they are a very tight fit and take some convincing to get in there. With an RMS3JC080 in JBOD mode for 8 drives and the Adaptec RAID 71605 (2274400-R) for the other 16, I had 24 SSDs and 24 ports.






IMG_1892.JPG


24 disks gave me results that were not much better than the 12-disk JBOD (3 in each of the 4-slot backplanes). So I isolated the 8 disks on the RMS3JC080 and the 16 disks on the Adaptec. Just for kicks I did a software RAID 0 across the two arrays, with dismal results. The results from 24 disks on two Adaptec controllers were fine; however, in that setup I don't have any PCIe slots available. After the two riser replacements there are only three PCIe 3.0 x8 slots and one PCIe 2.0 x4, and both mezzanine slots on the motherboard are used (JBOD RAID and Ethernet).
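
A quick sketch of why stacking more SATA SSDs behind a single controller stops paying off - the per-drive and per-slot numbers here are rough assumptions for illustration, not measurements:

```python
# Ballpark check of SATA SSDs behind a single HBA vs. the PCIe slot it sits in.
# Per-drive and per-slot figures are rough assumptions, not measurements.
ssd_seq_read_gbs = 0.5    # ~500 MB/s per SATA SSD, sequential
pcie3_x8_gbs     = 7.9    # rough usable bandwidth of a PCIe 3.0 x8 slot

for drives in (8, 16, 24):
    raw    = drives * ssd_seq_read_gbs
    capped = min(raw, pcie3_x8_gbs)   # one controller can't move more than its slot allows
    print(f"{drives:2d} drives: {raw:4.1f} GB/s raw, ~{capped:.1f} GB/s through one x8 controller")
```

Once the raw disk bandwidth passes what one x8 slot can move, extra disks behind that controller mostly just add capacity, which lines up with 24 disks not beating 12 by much.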

jbod and 24 disk results.PNG


To connect the four cages to the four ports (two from the RMS3JC080 and two from the motherboard) I needed two more cables. I have had bad luck with cables on eBay, so I got an AXXCBL730HDHD. I replaced the single 1100W power supply with two FXX750PCRPS units; it was cheaper to get two of the 750s than an extra 1100W. The Ethernet I/O module slot will have an AXX2FDRIBIOM, used as 40GbE rather than InfiniBand.


When using the Icy Docks I had to be more creative and used four Adaptec cables (2280000-R). It looked really messy when they were outside the chassis, and once inside there were silver cables running everywhere like spaghetti.

IMG_1893.JPG

Yes, I am using Samsung EVO (120GB) drives for boot - no flames please. I did not see the "Your next boot drive" post in the deals section until after I had purchased them.

As for CPUs, currently I have two E5-2620 v3s. Depending on testing results, I might swap them for two E5-2673 v3s.

For memory I am using 2x Hynix 32GB (HMA84GL7MMR4N-TF).

For the NVMe SSDs there will be 8x 2.5" Intel 750s (SSDPE2MW012T4R5) and two PCIe add-in-card versions of the same drive (SSDPEDMW012T401). While typing this it occurred to me that instead of the two Intel 750 add-in cards, I could use two Supermicro AOC-SLG3-2E4R cards and get four more 2.5" drives off the last two PCIe slots, for a total of 12 NVMe drives. You could mount them upside down on the fan shroud.

Parts list

· Server - R2208WTTYS
· 2 NVMe Drive Bays - A2U44X25NVMEDK
· Power Button - A2USTOPANEL
· JBOD Raid Module - RMS3JC080
· Mini SAS HD cables - AXXCBL730HDHD
· 64 GB Memory - 2 HMA84GL7MMR4N-TF
· 8x 1.2TB 2.5" NVMe Drives - SSDPE2MW012T4R5 (6 en route)
· 2x 1.2TB PCIe NVMe Drives - SSDPEDMW012T401
· 2 E5-2620 v3
· 2 boot drives
· Power Supply - 2 FXX750PCRPS
· Dual-port 40GbE I/O module - AXX2FDRIBIOM
· Dual-port 40GbE NIC - MCX354A-FCBT
· Tier 2 SSD Storage - 16x 500GB Samsung

Investment: ~$13,500

Tier 1 storage: NVMe 5.5TB (RAID 10); Tier 2 storage: SSD 3.8TB
Theoretical performance: 26GB/s and 2.3 million IOPS
Network: 4x 40G ports (not tested) using SMB with RDMA
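
For anyone who wants to sanity-check those headline numbers, a quick parametric sketch - the per-drive figures are rough Intel 750 1.2TB spec-sheet values and the result depends heavily on the read/write mix, so treat it as ballpark only (set drive_count = 14 to ballpark the max config below):

```python
# Ballpark sanity check: aggregate NVMe throughput/IOPS vs. the 40GbE uplinks.
# Per-drive figures are rough spec-sheet assumptions, not measurements.
drive_count    = 10         # set to 14 to ballpark the max config
seq_read_gbs   = 2.4        # ~2400 MB/s sequential read per Intel 750 1.2TB
rand_read_iops = 440_000    # ~440K 4K random-read IOPS per drive

agg_gbs  = drive_count * seq_read_gbs
agg_iops = drive_count * rand_read_iops

nic_ports    = 4
gbs_per_port = 40 / 8 * 0.95          # ~4.75 GB/s usable per 40GbE port
network_gbs  = nic_ports * gbs_per_port

print(f"drives : ~{agg_gbs:.0f} GB/s, ~{agg_iops / 1e6:.1f}M read IOPS raw")
print(f"network: ~{network_gbs:.0f} GB/s across {nic_ports}x 40GbE ports")
```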

Max config for @dba
If used locally as a SQL server, you could take out the MCX354A-FCBT and put in one more AOC-SLG3-2E4R for a total of 14 drives and a theoretical 36GB/s and 3 million IOPS in RAID 10 (or 6 million IOPS in RAID 0), all in a 2U enclosure.

Investment: ~$24,700
Tier 1 storage: NVMe 7.7TB (RAID 10); Tier 2 storage: SSD 15.1TB
Network limited to 2x 40G (56Gb IB) ports, 2x 10GbE (built-in X540), and 4x 1GbE (PCIe 2.0 x4) because of the PCIe slots.

@Patrick, once I get the rest of my drives, do you have three AOC-SLG3-2E4R cards and six more drives so we could try to set a world record for a 2U enclosure? I live in Folsom and could come down to the Bay Area; I believe that is where you are located.

Well anyways that is all I have for now.
 
Last edited:

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
@Thom - just so you know. I got back from a run and tried to give you a second like. Unfortunately my admin powers do not allow me to do so.

Please keep us updated with progress on this build.
There, I posted a detailed parts list/build for you to like :p
 

whitey

Moderator
Jun 30, 2014
2,766
868
113
41
Holy sh|t...DEEEEEP pockets...now I am feeling like a 'broke-ass' once again :-D

Good show sir!
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Fabulous/ambitious build!

You'll be running near a number of the underlying system limits here (total bandwidth across the PCI bridge, memory throughput, PCIe link throughput to the 4x 40-gig NICs, etc.). Because your filer application is really a big, bad memory pump, you'll be hitting some resources twice for each transaction (pull bits off the SSD, fill memory, format packets, drain memory out to the NIC, repeat really fast). It will be really interesting to see where you bottleneck - and if you can figure out why.

One thing to pay attention to: your decision to use 2 big RAM sticks rather than 4 smaller ones might hold you back a bit. Max memory throughput of the E5-2620 v3 is 59GB/s, and you'll need every bit of that to get close to warming up those 10x NVMe drives. But you only get that speed in a 4-channel memory configuration; in your 2-DIMM config you only get half, and I don't think ~29.5GB/s will do the job.
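
To put rough numbers on that, a minimal sketch assuming DDR4-1866, which is the E5-2620 v3's rated memory speed:

```python
# Peak memory bandwidth per socket = populated channels * transfer rate * 8 bytes/transfer.
# DDR4-1866 is the E5-2620 v3's rated memory speed; channel count is the variable here.
mts_per_channel    = 1866   # mega-transfers per second (DDR4-1866)
bytes_per_transfer = 8      # 64-bit channel

for channels in (2, 4):
    gbs = channels * mts_per_channel * bytes_per_transfer / 1000
    print(f"{channels} channels populated: ~{gbs:.1f} GB/s peak")
# -> roughly 29.9 GB/s with only 2 DIMMs vs. 59.7 GB/s with all 4 channels populated
```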
 

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
Holy sh|t...DEEEEEP pockets...now I am feeling like a 'broke-ass' once again :-D

Good show sir!
Same perf, less space:

· Server - R2208WTTYS
· 2 NVMe Drive Bays - A2U44X25NVMEDK
· Power Button - A2USTOPANEL
· JBOD Raid Module - RMS3JC080
· Mini SAS HD cables - AXXCBL730HDHD
· 64 GB Memory, 4 or 8 DIMMs (see post by @PigLover)
· 14x 1.2TB 2.5" NVMe Drives - SSDPE2MW012T4R5 (6 en route)
· 2 E5-2620 v3
· 2 boot drives
· Power Supply - 2 FXX750PCRPS
· Dual-port 40GbE I/O module - AXX2FDRIBIOM
· Tier 2 SSD Storage - 16x 250GB

Investment: ~$8,200; 3.5TB total

1.7TB NVMe and 1.8TB of SSD
 
Last edited:

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
Fabulous/ambitious build!

You'll be running near a number of the underlying system limits here (total bandwidth across the PCI bridge, memory throughput, PCIe link throughput to the 4x 40-gig NICs, etc.). Because your filer application is really a big, bad memory pump, you'll be hitting some resources twice for each transaction (pull bits off the SSD, fill memory, format packets, drain memory out to the NIC, repeat really fast). It will be really interesting to see where you bottleneck - and if you can figure out why.

One thing to pay attention to: your decision to use 2 big RAM sticks rather than 4 smaller ones might hold you back a bit. Max memory throughput of the E5-2620 v3 is 59GB/s, and you'll need every bit of that to get close to warming up those 10x NVMe drives. But you only get that speed in a 4-channel memory configuration; in your 2-DIMM config you only get half, and I don't think ~29.5GB/s will do the job.
I have other memory that I can try out, and I do have higher-end CPUs that support the full 2133MHz speed of the memory. The motherboard has 24 slots, so I can add more as needed, and of course it will all be documented. Right now I just need the additional drives.

I was thinking about the PCIe lanes this morning. As currently planned I will be using 66 lanes; 14 drives would be 74.
mother board lanes.PNG
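
For reference, a rough version of that lane budget - the per-device link widths below are assumptions for illustration, so the totals land a couple of lanes off the 66/74 figures above depending on what else gets counted (onboard X540, etc.). Two E5-2600 v3 CPUs expose 40 lanes each, so there are 80 to work with:

```python
# Rough PCIe lane budget; per-device link widths are assumptions for illustration.
current = {
    '8x 2.5" NVMe drives (x4 each)':       8 * 4,
    "2x Intel 750 add-in cards (x4 each)": 2 * 4,
    "40GbE I/O module (x8)":               8,
    "RMS3JC080 mezzanine (x8)":            8,
    "Adaptec 71605 (x8)":                  8,
}
cpu_lanes = 2 * 40   # two E5-2600 v3 CPUs expose 40 PCIe 3.0 lanes each

total = sum(current.values())
print(f"current plan : ~{total} of {cpu_lanes} lanes")

# 14-drive idea: swap the two x4 add-in cards for two x8 AOC-SLG3-2E4R carriers (4 more U.2 drives).
fourteen_drives = total - 2 * 4 + 2 * 8
print(f"14-drive plan: ~{fourteen_drives} of {cpu_lanes} lanes")
```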
 
  • Like
Reactions: Patrick

Patrick

Administrator
Staff member
Dec 21, 2010
12,518
5,821
113
@Patrick, once I get the rest of my drives, do you have three AOC-SLG3-2E4R cards and six more drives so we could try to set a world record for a 2U enclosure? I live in Folsom and could come down to the Bay Area; I believe that is where you are located.

Well anyways that is all I have for now.
I am in the Bay Area, but unfortunately I have no AOC-SLG3-2E4 cards. I would not want to use the R version on this platform given what I know about it from my 7x NVMe one.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
@dba caught my attention in his DCDW. @Patrick showed me how to cram 4x 2.5" NVMe drives into an Intel server. Now my goal is to turn 10 NVMe drives and 12 SSDs into a smoking-fast SAN. With 1.2TB NVMe drives and 2TB SSDs, it would have almost 18TB of storage.

The first part of the plan was to make sure I could use two of the Intel A2U44X25NVMEDK2 kits (4x 2.5" NVMe drive cages), and that worked.

View attachment 1127
The Adaptec RAID controller (in HBA mode) gave me acceptable perf for the tier 2. Once the 6 other NVMe drives show up, I'll post some numbers with 10 drives in a RAID 0 just for kicks. Hopefully with the 40GbE switch and the cards I just purchased from @PithyChats I'll be able to get that speed to my Hyper-V hosts and SQL servers.

Edit - we just finished routing the cables correctly and the lid closes :)

View attachment 1128

View attachment 1129
Thom

Those 4k numbers look low.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
Hey @Naeblis can I just ask why make your own chassis rather than just getting one? I'd imagine it to be cheaper to buy a pre-built one.
 

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
Hey @Naeblis can I just ask why make your own chassis rather than just getting one? I'd imagine it to be cheaper to buy a pre-built one.

It is an Intel server chassis (R2208WTTYS); I then added the two NVMe cages. The Icy Docks were only added to see if I could get 24 SSDs into the tier 2 storage.
 

Naeblis

Active Member
Oct 22, 2015
168
123
43
Folsom, CA
I wonder if the 4K writes are going to the "slower" SATA SSDs? Still, they should be in the hundreds of MB/s, not tens of MB/s.

But regardless, this is an epic build.
Thank you. There are no NVMe drives in these tests; I'm trying to find the optimal config for the remaining slots. I want 24 SSDs if I can swing it.