Some updates
Started with an Intel R2208WTTYS purchased from my favorite eBay seller kalleyomalley.
Added two 4x NVMe drive cages (A2U44X25NVMEDK). I had to remove the DVD/USB/power connector in the 3rd slot, which meant I needed an A2USTOPANEL to power on the server. The chassis does not have the standoffs for mounting the kit, so I used some double-sided tape.
Originally I wanted 24 SSDs, but the throughput was not there. You can get 24 SSDs into the 2U chassis by adding two ICY DOCK ToughArmor MB998SP-B 8x cages; they are a very tight fit and take some convincing to get in there. With an RMS3JC080 as JBOD for 8 drives and an Adaptec RAID 71605 storage controller (2274400-R) for the other 16, I had 24 SSDs and 24 ports.
24 disks gave me results that were not much better than the 12-disk JBOD (3 in each of the four 4-slot backplanes). So I isolated the 8 disks on the RMS3JC080 from the 16 disks on the Adaptec. Just for kicks I did a software RAID 0 across the two arrays, with dismal results. The results from 24 disks on two Adaptec controllers were fine, but in that setup I have no PCIe slots left: after the two riser replacements there are only three PCIe 3.0 x8 slots and one PCIe 2.0 x4, and both mezzanine slots on the motherboard are used (JBOD RAID and Ethernet).
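One plausible reason the 24-SSD setup plateaued (my assumption, not something I measured per-link): the aggregate sequential bandwidth of 16 SATA SSDs already exceeds what a single PCIe 3.0 x8 controller slot can move. A quick back-of-the-envelope check, using a typical ~550 MB/s per SATA III SSD and ~7,880 MB/s of usable PCIe 3.0 x8 bandwidth:

```python
# Back-of-the-envelope check (assumed figures, not measurements):
# do 8 or 16 SATA SSDs saturate one PCIe 3.0 x8 controller?
SSD_SEQ_MBPS = 550      # typical SATA III SSD sequential read
PCIE3_X8_MBPS = 7880    # ~985 MB/s usable per PCIe 3.0 lane, times 8

for drives in (8, 16):
    raw = drives * SSD_SEQ_MBPS                # ideal aggregate from the drives
    capped = min(raw, PCIE3_X8_MBPS)           # what the host link can carry
    print(f"{drives} drives: raw {raw} MB/s, link-limited {capped} MB/s")
```

At 8 drives the link has headroom; at 16 the drives can offer more than the slot can carry, so adding disks past that point buys little.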
To connect the 4 cages to the 4 ports (2 on the RMS3JC080 and 2 on the motherboard) I needed two more cables. I have had bad luck with cables on eBay, so I got an AXXCBL730HDHD. I replaced the single 1100W power supply with two FXX750PCRPS units; it was cheaper to get two 750s than an extra 1100W. The Ethernet I/O module slot will hold an AXX2FDRIBIOM used as 40GbE rather than IB.
When using the Icy Docks I had to be more creative and used four Adaptec cables (2280000-R). It looked really messy when I had them outside; once inside, there were silver cables running everywhere and it looked like spaghetti.
Yes, I am using Samsung EVO (120GB) drives as my boot drives; no flames please. I did not see the "Your next boot drive" post in the Deals section until after I had purchased them.
As for CPUs, I currently have two E5-2620 v3s. Depending on testing results, I might swap them for two E5-2673 v3s.
For memory I am using 2x Hynix 32GB (HMA84GL7MMR4N-TF).
For the NVMe SSDs there will be 8x 2.5" Intel 750s (SSDPE2MW012T4R5) and 2 PCIe versions of the same drive (SSDPEDMW012T401). While typing this, it occurred to me that instead of the two Intel 750 PCIe versions, I could use two Supermicro AOC-SLG3-2E4R cards and get 4 more 2.5" drives in the last two PCIe slots. That would be a total of 12 NVMe drives. You could mount them upside down on the fan shroud.
Parts list
· Server - R2208WTTYS
· 2 NVMe Drive Bays - A2U44X25NVMEDK
· Power Button - A2USTOPANEL
· JBOD Raid Module - RMS3JC080
· Mini SAS HD cables - AXXCBL730HDHD
· 64 GB Memory - 2 HMA84GL7MMR4N-TF
· 8x 1.2TB 2.5" NVMe Drives - SSDPE2MW012T4R5 (6 en route)
· 2x 1.2TB PCIe NVMe Drives - SSDPEDMW012T401
· 2 E5-2620 v3
· 2 boot drives
· Power Supply - 2 FXX750PCRPS
· Dual port 40GbE Ethernet - AXX2FDRIBIOM
· Dual port 40GbE Ethernet - MCX354A-FCBT
· Tier 2 SSD Storage - 16x 500GB Samsung
Investment: ~$13,500
Tier 1 Storage: NVMe 5.5TB (RAID 10), Tier 2 Storage: SSD 3.8TB
Theoretical performance: 26GB/s and 2.3 million IOPS
Network: 4x 40G ports (not tested) using SMB and RDMA
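For anyone wondering where headline numbers like these come from: they are basically drive count times per-drive spec-sheet figures (my assumptions below are Intel's published numbers for the 1.2TB 750, roughly 2,400 MB/s sequential read and ~440K 4K random-read IOPS; real results will be lower due to PCIe topology, CPU, and RAID overhead). A quick sketch:

```python
# Ideal-scaling aggregate from assumed Intel 750 1.2TB spec-sheet figures.
SEQ_MBPS = 2400        # per-drive sequential read (datasheet, assumed)
RAND_IOPS = 440_000    # per-drive 4K random read IOPS (datasheet, assumed)

def aggregate(n_drives):
    """Best case: every drive streams independently, no bottlenecks."""
    return n_drives * SEQ_MBPS / 1000, n_drives * RAND_IOPS

for n in (10, 12):  # 8 U.2 + 2 AIC today; 12 with the AOC-SLG3-2E4R idea
    gbps, iops = aggregate(n)
    print(f"{n} drives: ~{gbps:.1f} GB/s, ~{iops / 1e6:.2f}M IOPS (ideal)")
```

The 26GB/s figure above is in this ballpark; RAID 10 cuts effective write IOPS roughly in half since every write lands on both mirrors.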
Max config for @dba:
If used locally as a SQL server, you could take out the MCX354A-FCBT and put in one more AOC-SLG3-2E4R for a total of 14 drives and a theoretical 36GB/s and 3 million IOPS in RAID 10, or 6 million IOPS in RAID 0, all in a 2U enclosure.
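The 2:1 gap between the RAID 0 and RAID 10 figures follows from the mirror write penalty: every write in RAID 10 lands on both copies, so only half the drives do independent work. A sketch with a hypothetical per-drive figure (not from this post) that lands near those numbers:

```python
# Hypothetical per-drive IOPS figure, chosen only to illustrate the ratio.
PER_DRIVE_IOPS = 430_000
DRIVES = 14

raid0_iops = DRIVES * PER_DRIVE_IOPS       # all 14 drives independent
raid10_write_iops = raid0_iops // 2        # every write hits both mirrors

print(f"RAID 0:  ~{raid0_iops / 1e6:.1f}M IOPS")
print(f"RAID 10: ~{raid10_write_iops / 1e6:.1f}M write IOPS")
```

That gives roughly 6.0M vs 3.0M, matching the ratio quoted above (reads in RAID 10 can be served from either mirror, so the penalty mainly applies to writes).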
Investment: ~$24,700
Tier 1 Storage: NVMe 7.7TB (RAID 10), Tier 2 Storage: SSD 15.1TB
Network limited to 2x 40G (56G IB) ports, 2x 10GbE (built-in X540), and 4x 1GbE (PCIe x4 v2) because of PCIe slots.
@Patrick, once I get the rest of my drives, do you have 3 AOC-SLG3-2E4R cards and 6 more drives so we could try to set a world record for a 2U enclosure? I live in Folsom and could come down to the Bay Area; I believe that is where you are located.
Well, anyway, that is all I have for now.