Upgrading 96TB Colocation. Need some ideas for making higher IOPS


LeeMan

New Member
Oct 18, 2015
I built my 2U Supermicro server around October of last year. It has been running great, but I've noticed some IOwait time accumulating in the logs. We're already planning to upgrade the 2U Supermicro chassis to a 4U 36-bay SAS2 chassis, but at the same time I'd like to work on lowering our IOwait times on some of the large files we host.
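Before spending on cache hardware, I've been trying to confirm the disks really are the bottleneck. A quick sketch of what I've been running to check (the awk line just parses the cumulative CPU counters out of /proc/stat, so no extra packages are needed):

```shell
# Print the share of CPU time spent in iowait since boot.
# /proc/stat "cpu" fields: user nice system idle iowait irq softirq steal
awk '/^cpu / { total = $2+$3+$4+$5+$6+$7+$8+$9;
               printf "iowait: %.1f%% of CPU time since boot\n", 100*$6/total }' /proc/stat

# With the sysstat package installed, `iostat -x 5 3` additionally shows
# per-device %util and await -- %util near 100 on the WD Reds (and low
# numbers on the SSDs) would confirm the spinners are the bottleneck.
```

A sustained high iowait percentage plus saturated data disks is the pattern that makes a read cache worthwhile; if iowait is low, the bottleneck is probably elsewhere (e.g. the 1Gbps link).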

We are a very small file hosting business (basically just me running it) serving very large 2GB-80GB video files, so I'm looking at some sort of caching solution, since we will be running sixteen 8TB drives by the end of February. All data drives are 8TB WD Red drives running at 5400rpm. The OS runs off two 240GB Micron enterprise SSDs in RAID1, and we will also be upgrading the OS drives to two 400GB Intel DC S3710 series SSDs. That gives us some more breathing room for the current file loadout and the metadata we store on the SSDs.

We were thinking about building a cache around a Flash Accelerator F40, but I was curious whether there are faster options, as this is my first adventure in setting up a cache on our Ubuntu 14.04 system. I've also thought about using some 400GB Intel PCIe cards for the cache, but our budget is somewhat limited (we could allocate around $400-500 for the cache upgrade).
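For context, the option I've been reading about is the in-kernel bcache module (already in 14.04's 3.13 kernel, with bcache-tools in the repos). A rough sketch of what I think the setup looks like -- device names are placeholders (/dev/sdb for a fresh data disk, /dev/sdc for the cache SSD), and note that make-bcache reformats both devices, so this would only fit the new drives we're adding, not an in-place conversion of the existing SnapRAID disks:

```shell
sudo apt-get install bcache-tools

# Format the backing disk and the caching SSD (DESTROYS existing data).
sudo make-bcache -B /dev/sdb
sudo make-bcache -C /dev/sdc

# Attach the cache set to the backing device, using the cache-set UUID
# printed by the -C step above.
sudo bash -c 'echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach'

# writethrough keeps the array safe if the cache SSD dies; writeback is
# faster for writes but risks in-flight data on an unmirrored cache device.
sudo bash -c 'echo writethrough > /sys/block/bcache0/bcache/cache_mode'

# bcache skips large sequential reads by default; setting the cutoff to 0
# lets our big video files be cached as well.
sudo bash -c 'echo 0 > /sys/block/bcache0/bcache/sequential_cutoff'
```

I haven't run this yet, so corrections welcome -- the filesystem would then go on /dev/bcache0 instead of the raw disk.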

Current server configuration, before any upgrades:

RAID1 240GB SSDs for Ubuntu 14.04 LTS
12x 8TB WD Red drives w/ SnapRAID
64GB DDR4-2400 1.2V ECC
Intel Xeon E5-2658 v3 (12 cores/24 threads)
Supermicro SC826E16 chassis
LSI 9211-8i HBA
1Gbps network


Just curious whether anyone has other ideas, as we're completely open to any route that would be beneficial!