I've been playing around with building a caching cluster at work for our big customers, the ones whose GIS data takes well over a week to cache (one takes 65+ hours just for two levels). This is the start of the build log for the cluster I've specced out. I chose NAS4Free over FreeNAS and OmniOS + napp-it simply because it's the platform I have the most experience with.
Storage Server
- Supermicro 6047R-E1R24L
- 2x Intel Xeon E5-2603 Sandy Bridge-EP
- 4x Crucial 16GB (2x8GB kit)
- 2x WD Blue 250GB HDD (mirrored for boot; probably OmniOS + napp-it now)
- 10x WD Black 2TB (5 striped mirrors)
- Intel X540-T2
- 4x Icy Dock EZConvert MB882SP-1S-1B
- 4x Intel DC S3500 300GB SSD (2 mirrored for ZIL & 2 striped for L2ARC; rough pool layout sketched below)
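Since the pool layout is easier to see as commands than as a parts list, here's a rough sketch of the zpool setup I have in mind. The pool name and OmniOS-style device IDs (cXtYd0) are placeholders; the real names will depend on how the HBA enumerates the drives.

    zpool create tank \
        mirror c1t0d0 c1t1d0 \
        mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 \
        mirror c1t6d0 c1t7d0 \
        mirror c1t8d0 c1t9d0   # 10x WD Black 2TB as 5 striped 2-way mirrors

    zpool add tank log mirror c2t0d0 c2t1d0   # 2x DC S3500 as a mirrored SLOG (ZIL)
    zpool add tank cache c2t2d0 c2t3d0        # 2x DC S3500 as L2ARC (cache vdevs always stripe; ZFS won't mirror them)

The draw of striped mirrors over RAID-Z here is random read IOPS and quick resilvers, which suits serving huge numbers of small tile files.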
Caching Server
- Supermicro 5018D-MF
- Intel Xeon E3-1275V3 Haswell
- Crucial 16GB (2x8GB kit)
- WD Blue 250GB HDD (fast enough since not much data needs to live locally; it'll be backed up to another storage array anyway)
- Intel X540-T1
- Server 2012 R2 Standard
- ArcGIS for Server 10.2.2
Coboc 7' CAT7 cables, orange because that color isn't in use in our server room yet
Current testing shows a 76% speed improvement going from a single caching server to four in my VM environment, and I expect the gain to be even larger on physical hardware, since storage for the VMs is just a Lenovo Haswell i5 desktop w/ 8GB RAM, a single 1GbE NIC, and 2x 320GB Hitachi SATA drives in RAID 1 on Ubuntu.
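For anyone wondering where the scaling comes from: the caching jobs are launched with arcpy's ManageMapServerCacheTiles geoprocessing tool, and the service-instance count is what gets raised as caching servers are added to the site. Below is only a rough sketch; the service path, scale levels, and instance count are made-up placeholders, not our actual values.

    import arcpy

    # Placeholder admin connection and service path; ours is different
    service = r"GIS Servers\arcgis on cachecluster (admin)\BigCustomer.MapServer"

    # Placeholder scale levels to (re)build
    scales = [36111.909643, 18055.954822]

    arcpy.ManageMapServerCacheTiles_server(
        service,                # input_service
        scales,                 # scales to build tiles for
        "RECREATE_ALL_TILES",   # update_mode
        8,                      # num_of_caching_service_instances: raise as servers are added
        "",                     # area_of_interest (blank = full extent)
        "",                     # update_extent (blank = full extent)
        "WAIT")                 # block until the caching job completes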
I expect to be able to start ordering parts by the end of the month at the latest and will definitely enjoy posting build pictures.