ITRonin's FreeNAS lab server


itronin (Well-Known Member, Denver, Colorado)
Build's Name: freenas41.xxx.local
Operating System / Storage Platform: FreeNAS 11.2-RELEASE
CPU: Intel Xeon E3-1230
Motherboard: Tyan S5510GM3NR
Chassis: Corsair Carbide 300R (first generation), modified for 6 x 5.25" bays
Drives:
Zpool "Disk": 3 x 4-disk RAIDZ1 vdevs with a SLOG (rough layout sketch below the spec list)
8 x 600GB HGST NetApp X422 10K SAS HDD
4 x 600GB Toshiba AL13SEB600 10K SAS HDD
2 x 80GB Intel DC S3500 SATA SSD on 6Gbps SATA (SLOG)
Zpool "SSD": 6 x 2-disk mirror vdevs
12 x 400GB Hitachi HUSML4040ASS601 SAS SSD
RAM: 4 x 8GB ECC UDIMM
Add-in Cards:
MCX354A-QCBT (649281-B21) flashed to MCX354A-FCBT firmware for 40GbE
Dell PERC H310 flashed to LSI IT mode
HP SAS Expander flashed to the latest firmware package
Power Supply: Corsair HX750
Other Bits:
6 x Icy Dock MB994SP-4S hot-swap cages, providing 24 x 2.5" 15mm SAS hot-swap bays
Corsair H60 AIO cooler
Corsair SP140 blue LED fans
Corsair SP120 blue LED fans
Fan extension cable
6 x SFF-8087 to SATA breakout cables
2 x SFF-8087 to SFF-8087 cables
5.25" 4-bay drive cage cut from the 1999 ATX (V1) lab system case
JB Weld (black) used to build the lower cross-brace fascia
Incept Date: 11/2018; updated 3/2019
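For reference, this is roughly what that pool layout looks like as zpool commands. A sketch only: the device names (da0-da11 for the spinners, da12-da23 for the SAS SSDs, ada0/ada1 for the SATA SSDs) are made up, the mirrored log is an assumption, and on FreeNAS you would normally build the pools through the GUI rather than at the shell.

  # "Disk" pool: three 4-disk RAIDZ1 vdevs plus a mirrored SLOG (device names are placeholders)
  zpool create Disk \
    raidz1 da0 da1 da2 da3 \
    raidz1 da4 da5 da6 da7 \
    raidz1 da8 da9 da10 da11 \
    log mirror ada0 ada1

  # "SSD" pool: six 2-disk mirror vdevs
  zpool create SSD \
    mirror da12 da13 mirror da14 da15 mirror da16 da17 \
    mirror da18 da19 mirror da20 da21 mirror da22 da23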

Usage Profile: Provides 10GbE NFS storage to my ESXi systems via a storage VLAN. This system replaces a Synology DS1010+ with a DX510 expansion unit that was providing NFS over 1GbE VLANs.
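If it helps anyone, mounting the export on an ESXi host looks something like the following. The hostname is the build name above, but the export path and datastore name are made up for the example.

  # On each ESXi host: mount the FreeNAS NFS export as a datastore, then confirm it
  esxcli storage nfs add --host freenas41.xxx.local --share /mnt/SSD/vmware --volume-name ssd41
  esxcli storage nfs list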

Other information:
The remaining lower drive bay provides non-hot-swap space for 8 x 2.5" devices using dual-drive adapter brackets.

I lifecycle large chunks of my lab every 5-8 years depending on cash flow and try to get the most bang for the buck. Only in the last 10 years have I been able to start writing it off, as before that I worked directly for corporations. The first lab was in 1999, then 2007, then 2012, and now 2018. I still have a chassis, motherboard, CPU, memory, and some SCSI devices from 1999, though I am about to punt those to the recyclers if I can get some data off some old DAT tapes.

(Q) Why the old spinners?
(A) I have customers with 24-bay servers/shelves/NAS units which are mostly spinners, and I mimic their scenarios for support or project work. My customers are mostly SMB, and we try to get 7-8 years out of their server and network infrastructure.

(Q) Why did you build this monstrosity instead of buying a 24-bay P2000 or an 8/9 x 5.25" bay mid-tower?
(A) Because I like mid-towers, and I got it into my head that I could build this by re-using a compute node I had on hand from the 2012 evolution. Four 300Rs fit on each bread rack level very well. Mid-towers are a bit more manageable for me, though I have to admit this thing weighs about what a 2U server weighs; it is just a bit less cumbersome.

(Q) That is an awful lot of pricey Icy Dock kit. Okay, not a question.
(A) Hot-swap lets me reconfigure very easily for testing things like vSAN or the drives themselves. I've been buying Icy Dock products for a long time. I cleaned out the SAS hot-swap inventory from a business that seems to be dumping their Icy Dock products, and I got a few more via eBay.

(Q) The fascia work is a bit messy. Again, not a question.
(A) First time trying to mod fascia, and first time working with epoxy. I'm not the most artistically or mechanically inclined person, but what I build is usually pretty solid, just not that pretty. My OCD kicks in when I look at the lower fascia brace and realize it's pushed out on one side, which makes the drive cages look like they aren't level across the front.

(Q) Why the HP SAS expander instead of another HBA?
(A) Because I wanted to play with it, and the lower drive bay will accommodate 8 x 2.5" drives, which puts me above 24 drives total.

(Q) Why the LED fan lights?
(A) I like blinky lights. Makes me feel fuzzy. The practical reason is that sometimes my neighbor has to be my remote hands when I am on the road, and color-coded systems help them find things quickly. But really, it's because I like blinky lights. The compute nodes have a red fan in the lower fan position. Labels are on the back.

(Q) What is the power usage?
(A) No idea; I have not measured it, but my electric bill did not change when I turned off the Synology it replaced and turned this on.

(Q) How quiet is this system?
(A) It's okay. It's about as quiet as a Synology DS2411+ and DX1211 combined. It lives in the basement with the bodies, and they don't seem to mind. ;-)

(Photos attached: front, front with bezel removed, side.)
 

PGlover (Active Member)
Can you please explain in more detail what services you provide to SMB customers?
 

itronin (Well-Known Member)
Hmm, not to hijack my own thread or turn it into an ad: IT ops and infrastructure. My SMB customers are either self-supporting for end-user applications or outsource that support (T1 to some T2). Escalations to T2 and T3, network, infrastructure, cloud management, monitoring, roadmaps, growth, budgeting, and vendor management they hand to me, either onsite (local or travel) or remote. I'm also part of a small group of consultants with complementary skills, and we come together for larger enterprises to perform evals, tiger-team work, and in some cases soup-to-nuts transformations or IT turnarounds.
 

itronin (Well-Known Member)
"How's performance? Ran some tests?"
I have updated the original post with the current config.
Performance numbers are meh, okay-ish.

Slow progress on doing anything particularly functional (lack of time), but last weekend I did have a chance to do some preliminary testing using ATTO.

No tuning or tweaking except for bumping the NFS server count to 6, and no playing with iSCSI for now.
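The server count is the "Number of servers" field under Services > NFS in the FreeNAS UI. For anyone curious what that maps to underneath, a sketch of the plain-FreeBSD equivalent (FreeNAS manages this for you, so this is illustrative rather than something to paste into the appliance):

  # /etc/rc.conf on plain FreeBSD; -n sets the number of nfsd server threads
  nfs_server_enable="YES"
  nfs_server_flags="-u -t -n 6"

  # confirm the running nfsd picked up the count
  ps ax | grep nfsd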

Overall, things saturate at about 1.08GB/s read/write with largish blocks (pics attached) on the SSD pool, regardless of mirror or RAIDZ1 vdevs.

The perf results lead me to believe I'm throttling somewhere. The SLOG has no impact on the disk performance, and sync is NOT disabled on the datasets. It's not close enough to wire speed for me to think it's the MTU size.
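For anyone chasing the same thing, a couple of quick checks would separate raw network throughput from sync-write/SLOG behavior. A sketch only: it assumes iperf3 is available on both ends, and "SSD/scratch" is a made-up dataset name.

  # Raw network throughput, independent of ZFS
  iperf3 -s                              # on the FreeNAS box
  iperf3 -c freenas41.xxx.local -P 4     # on the client, 4 parallel streams

  # See whether sync writes are the limiter on a scratch dataset
  zfs get sync SSD/scratch
  zfs set sync=disabled SSD/scratch      # testing only
  zfs set sync=standard SSD/scratch      # put it back afterwards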

More specifics:

Switch: ICX6610-48P. The VMware hosts use 10GbE tagged VLANs for vMotion and NFS, and the FreeNAS box uses a 40GbE tagged VLAN for NFS. Everything runs the standard 1500 MTU, and I probably won't change that as it's not worth the hassle.
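On the FreeNAS side the tagged storage VLAN ends up as a plain FreeBSD vlan interface. A sketch only: the parent interface name (mlxen0), VLAN ID, and address are assumptions, and FreeNAS creates this through the GUI rather than at the shell.

  # FreeBSD-side view of a tagged storage VLAN (names, VLAN ID, and address are made up)
  ifconfig vlan100 create vlan 100 vlandev mlxen0
  ifconfig vlan100 inet 10.0.100.41/24 mtu 1500 up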

FreeNAS server: stock config except the NFS server count was changed from 4 to 6
Pools: SSD tested as 6 x mirror and 4 x RAIDZ1; Disk as 4 x RAIDZ1
HP 40Gb IB card cross-flashed to Mellanox ConnectX-3 40GbE firmware, in an x8 PCIe 3.0 slot
Single PERC H310 cross-flashed to LSI 9211 P20 IT firmware, in an x8 PCIe 3.0 slot (it is a PCIe 2.0 device)
Single HP SAS expander on the latest firmware, with 2 x 4 channels back to the PERC
All devices show as negotiated at 6Gbps

VMware hosts: stock config
(vmware51) Tyan S5510GM3NR, E3-1230, 32GB 1333 UDIMM, Mellanox ConnectX-3 10GbE in an x8 PCIe 3.0 slot
(vmware61) SM X9SRL-F, E5-2670 v2, 128GB PC3L-1333, Mellanox ConnectX-3 10GbE in an x8 PCIe 3.0 slot

Test client:
Windows 10 VM
120GB thin-provisioned virtual disk, 4 vCPUs, 16GB RAM

(ATTO screenshots attached: SSD pool from vmware61 and vmware51, Disk pool from vmware61 and vmware51.)