I know I already have a thread on planning this build, but this new thread will follow the actual build. The finance committee (my wife) has approved the new file server!
One more time...here's my existing lab:
The Lab
I'll start off with the purpose of my build. First, my current hardware RAID array is getting down to around 10% free space, so it is time for an expansion. I have 12 x 2TB drives in RAID 6 giving me around 18.7TB of formatted space. Second, and more importantly in the short term, I need a network storage solution to do some benchmarking.
My original choices included the following:
SuperChassis 846TQ-R900B
(2) E5-2670 @ 2.6 GHz
Intel S2600CP Motherboard
128GB RAM (16 x 8GB)
Noctua i4 Heatsinks
Intel AXXRMM4 Remote Management Module
(2) SanDisk Cruzer 16GB CZ33
Intel P3605 1.6TB
Twinax Cable
(2) Intel X520-DA2
(5) Noctua NF-R8
(3) LSI 9210-8i
(10) 5TB Toshiba X300
(6) Mini-SAS Breakout
CyberPower 1500VA UPS
The problem I encountered with this plan is that the more I read about the S2600CP, the more reservations I have. For one, the PCIe slots are suspect depending on your BIOS version. I'd rather spend a little more and just have a board that works without issues. That brings me to my new option: basically everything above, with the following differences:
Supermicro X9DR3-LN4F+
192GB RAM (24 x 8GB)
(2) LSI 9210-8i
This option allows me to add 64GB of RAM and saves a PCIe slot by using the on-board C606 SAS. But... will the C606 work well with FreeNAS, and will it play nice with the 5TB Toshiba drives? (There's a quick sanity check for that sketched below.) If it does both of those things, this is likely the direction I'll go. If either of those things is not true, my backup option would be this:
Supermicro X9DR7-LN4F-JBOD
128GB RAM (16 x 8GB)
(2) Supermicro AOC-S2308L-L8e
This gives me three 2308 controllers to work with and still saves a slot, but limits me to 128GB of RAM. I could eventually go to 256GB if I wanted to spend an additional $800, but for now, I think 128GB should be plenty.
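Whichever board I end up on, the C606 question should be easy to sanity-check once the hardware arrives. Here's roughly what I'd run from the FreeNAS shell; the da0 device name is just a placeholder for whatever FreeNAS assigns to a Toshiba hung off the C606's SCU ports:

```
camcontrol devlist      # the Toshiba should enumerate on the isci(4) controller
diskinfo -v da0         # confirm the full 5TB capacity is reported, not a 2.2TB clip
smartctl -a /dev/da0    # make sure SMART passthrough works through the C606
```

If the drive shows its full capacity and SMART data comes back clean, I'll call the C606 good enough.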
So, a few more concerns and questions. For this pass, I plan to first set up a single-drive vdev and zpool for benchmarking. Basically, I'm going to take my P3605 and see if it can max out my 10Gb network. I'll have four 10Gb links to work with. The first will connect to my X1052 switch for general file sharing on the network. The other three will be direct Twinax DAC connections to my three lab servers, so each server gets a dedicated 10Gb link for iSCSI.
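For the benchmarking itself, here's a rough sketch of what I have in mind; the hostname and the nvd0 device name are placeholders, and on FreeNAS I'd normally build the pool through the GUI rather than at the shell:

```
# 1) Raw network first, so I know each 10Gb link's ceiling (assuming iperf3 is installed):
iperf3 -s                        # on the FreeNAS box
iperf3 -c freenas -t 30 -P 4     # from each lab server in turn; ~9.4Gb/s is the practical max

# 2) Single-drive vdev/zpool on the P3605, compression off so /dev/zero isn't a freebie:
zpool create -m /mnt/bench bench nvd0
zfs set compression=off bench
dd if=/dev/zero of=/mnt/bench/testfile bs=1M count=32768   # ~32GB sequential write
dd if=/mnt/bench/testfile of=/dev/null bs=1M               # read back (ARC caching may inflate this)
```

The dd numbers are only a smoke test; once the links check out, the real test is iSCSI traffic from the ESXi hosts.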
My ten (10) 5TB Toshiba drives will be set up as a single RAIDZ2 vdev in its own zpool. This will facilitate backups and any media or file sharing I wish to have. I will also reconfigure my existing 20TB hardware RAID array to back up the most important things from my 40TB array. So basically, half of my backups will have another set of backups on a separate physical server. Right now, that consists mostly of Veeam backup files of all of my various virtual machines. I keep two weeks of daily backups of most.
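For reference, the pool layout I'm describing would look roughly like this (the "tank" name and da0-da9 devices are placeholders; in practice I'd create it in the FreeNAS GUI, which also handles partitioning and swap):

```
zpool create -m /mnt/tank tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
zpool status tank
# Capacity math: 10 drives - 2 parity = 8 data drives x 5TB = 40TB raw,
# which is where my "40TB array" figure comes from (usable space will be less).
```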
Finally, I'll be setting up another vdev and zpool, striped only, to provide datastores for my three ESXi hosts. I have SSDs in all of those servers, so this will be a test to see how I like the performance of a network-based datastore. Initially, I thought about setting up an array of eight (8) 2TB HGST drives and putting an L2ARC and SLOG in front of it for performance. That would give me quite a bit of space to work with and, hopefully, good performance.
My other option would be to get eight (8) Intel S3700 400GB drives and put them into a striped vdev. That gives me 3.2TB of raw storage and most likely some great performance. I'll have backups of all of the VMs on my 40TB array (and eventually my 20TB array), so I'm okay with no redundancy. This is a test lab; if that array goes down and has to be rebuilt and restored, so be it. Downtime won't be an issue, and I won't have anything mission-critical on this array.
My final option for this new datastore array would be to get 8 or 10 Samsung SM843T 480GB drives and build the same striped vdev. These are cheaper than the S3700s and provide more capacity: eight give me 3.8TB of raw storage, and ten would give 4.8TB. I assume that with either of these options I won't need an L2ARC or SLOG... I hope.
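Whichever drives win, the datastore pool itself is the same shape. A sketch, with placeholder device and pool names (the log/cache lines only apply if I go with the HGST spinners):

```
# Plain stripe, no redundancy -- eight single-disk vdevs:
zpool create -m /mnt/vmstore vmstore da10 da11 da12 da13 da14 da15 da16 da17

# HDD variant only: put cache/log devices in front (hypothetical spare SSDs):
# zpool add vmstore cache nvd1    # L2ARC
# zpool add vmstore log nvd2     # SLOG -- only helps sync writes, e.g. iSCSI/NFS to ESXi
```

One note to myself: a stripe of eight disks is really eight single-disk top-level vdevs, so losing any one drive loses the whole pool. That's the trade I'm accepting for this lab datastore.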
So... thoughts? Concerns? Suggestions?
I'm about to start ordering everything! I've already ordered my P3605!