My First FreeNAS - Planning Phase


briandm81

Active Member
Aug 31, 2014
I've been a hardware RAID guy for as long as I've been a RAID guy of any kind. But now...I'm going to try to make the leap to ZFS and FreeNAS. My goals are twofold. First and foremost, I need a network-based storage system to test things like iSCSI over 10GbE. Second, I need to replace my aging backup RAID array. I also have a third goal of setting up a network-based datastore for my ESXi hosts, but that's down the road.

So I have broken this into three phases. Phase one would give me the new physical server necessary to run FreeNAS, along with a single NVMe drive to use as a simulated super-fast iSCSI device for my benchmarking. Here is my preliminary plan and cost estimate to present to my finance committee (AKA my wife...) for approval once I have all of the costs finalized. So here it is...



The goal is to connect all four servers (my existing lab of three servers and my new FreeNAS server) directly to my Dell X1052. I would then also connect each server directly to the FreeNAS box for a dedicated iSCSI connection. I've chosen the S2600CP as it is readily available at Natex, and other dual LGA 2011 boards seem to be difficult to find. I've gone with 128GB of RAM, as 256GB would nearly double the cost of the hardware for the foundation of the box. I have chosen a pair of DA2s to provide the four ports necessary for my connection strategy. But wait...I have questions.

I've done a lot of research on the S2600CP and have two concerns/questions. First, will it fit in an SC846 chassis? I have an SC846 today and I am running an EEB motherboard in that server just fine. So I think the answer is yes, but I've heard of people drilling and tapping cases, and I'd rather not have to do all that.

Second, the PCIe slots on the S2600CP seem fishy. If I understand correctly, the newest BIOS only runs certain cards in PCIe 3.0 mode. So I might have to stay on an older BIOS and use only certain slots? This really only matters for my NVMe devices, so it might not be an issue, but IOPS are key to my benchmarking, so this is a big one.

If you've made it this far...thanks for reading...now on to phase two, replacing my aging RAID array. I've already had to replace a pair of drives over the last 18 months, so I can't imagine it's going to get better. I might repurpose the array when I'm done with the FreeNAS build as a secondary backup for key files, but that's about it. Here's the plan:



I have fewer concerns here. I plan on picking up three LSI 9210-8i cards (no expanders!) and hooking those into my 846TQ with some long mini-SAS cables. Then I'll purchase ten 5TB drives. Six to ten drives is what I've heard makes the most sense for a RAIDZ2 array, so I maxed it out. That would give me 50TB of raw storage in a RAIDZ2, for an effective 40TB of usable space. This would roughly double my current 24TB raw / 20TB effective. My only question here is...am I missing anything? Is there a better option for 5TB drives than these? They seem to be the best deal for a fully warrantied drive.
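A quick sanity check on the RAIDZ2 math above (the pool and device names in the comment are placeholders, not from the build):

```shell
# RAIDZ2 reserves two drives' worth of parity, so usable space is
# (N - 2) * drive size -- before TB/TiB conversion and metadata overhead.
drives=10
size_tb=5
raw=$(( drives * size_tb ))
usable=$(( (drives - 2) * size_tb ))
echo "raw: ${raw}TB  usable: ${usable}TB"
# The pool itself would be created along these lines (device names hypothetical):
#   zpool create backup raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9
```

So the 50TB raw / 40TB effective figures check out, though real usable space will land a bit lower once TiB conversion and ZFS overhead are counted.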

Finally...we have Phase Three. Basically this is building a new datastore for my ESXi servers to share. I don't care about HA; I just want to try something other than all of my direct-attached SSDs. Here's the plan:



I'm basically getting some of the cheaper Ultrastars out there and throwing a P3700 in there to be both the L2ARC and the SLOG (ZIL). The drive should be plenty big for both and has tons of endurance for my purposes. There are two alternate plans. Alternate plan one would replace the HGST drives with another set of 5TB Toshiba drives...but man, that drives the cost up. Alternate plan two would replace the P3700 with two different drives. My goal is to be able to max out my 10GbE connection, so I'd be open to suggestions for what to replace the P3700 with. I wanted to use a P3600...but the writes on that are kind of slow. They are very fast on the P3700.
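Since the plan is one P3700 doing double duty, here is a sketch of how that carve-up might look on FreeNAS/FreeBSD. The device name, partition sizes, and pool name are all assumptions for illustration, not part of the plan:

```shell
# Split one NVMe device into a small SLOG partition plus L2ARC.
# A SLOG only needs to absorb a few seconds of sync writes, so it can be tiny;
# the remainder of the device goes to L2ARC.
gpart create -s gpt nvd0                          # new GPT table on the NVMe device
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0     # small SLOG partition
gpart add -t freebsd-zfs -l l2arc0 nvd0           # rest of the drive for L2ARC
zpool add tank log gpt/slog0                      # attach as the pool's log device
zpool add tank cache gpt/l2arc0                   # attach as the pool's cache device
```

Worth noting that the two workloads will contend for the same NAND and controller under load, which is exactly the concern raised in the reply below about sharing one device.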

Ok...if you made it this far, I probably owe you some sort of consulting fee. Thanks for reading and any input/guidance/wisdom/knowledge you can all provide!
 

briandm81

Active Member
Aug 31, 2014
Oh, and one final option would be to take the 2630 v2s I have in one of my ESXi boxes, replace them with 2670s like my other two ESXi boxes, and use the v2s in the FreeNAS box. Not sure if it would be worth the trouble...but a thought.
 

ttabbal

Active Member
Mar 10, 2016
Maxing out 10GbE... I don't really play at that level, but a few suggestions come to mind...

Don't share L2ARC and SLOG. Use a smaller, overprovisioned SSD for the SLOG. You will bottleneck here if you try to share it.

For max IOPS, I would go all-in on SSD and drop the spinners entirely. Mirrored SSDs, at least two vdevs in the pool. If you don't mind some possible downtime, you could use a big non-redundant stripe across a few fast SSDs, then replicate that over to the slower spinners however often you like as a backup. It gets expensive fast, but you said you wanted 1GB/sec throughput on random I/O.
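The two layouts described above could be sketched like this (pool and device names are placeholders):

```shell
# Option A: two mirrored vdevs -- IOPS scale across vdevs,
# and each mirror survives the loss of one drive.
zpool create fastpool \
    mirror ada0 ada1 \
    mirror ada2 ada3

# Option B: a plain non-redundant stripe for maximum space and speed,
# backed up by periodic replication to the spinner pool:
#   zpool create fastpool ada0 ada1 ada2 ada3
#   zfs snapshot fastpool/vms@nightly
#   zfs send fastpool/vms@nightly | zfs recv backup/vms
```

The trade-off is space: option A gives up half the raw capacity to mirroring, while option B keeps all of it but a single drive failure takes the whole pool down until it is restored from the replica.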

I don't see any problem with the backup array being 10x5TB raidz2. If it were much bigger in either disk size or number of disks, I would consider raidz3 or splitting into multiple vdevs. I use two 6x2TB raidz2 vdevs for a similar purpose and I'm happy with it.
 

briandm81

Active Member
Aug 31, 2014
That's an interesting thought. I have four 250GB drives in hardware RAID 0 right now, and my single Intel 750 runs laps around them for most things. Perhaps I need a pair of 1.6TB P3600s, striped? 3.2TB of space...super, super fast? Hmm...cost-wise it isn't much different. I get less space, but I don't really need all that much space for this purpose anyway. I'll definitely consider something like that for Phase 3. I have plenty of time; that won't happen until Christmas at the earliest.
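For what it's worth, the line-rate arithmetic on a two-drive stripe against a single 10GbE link (line rate only, ignoring protocol overhead):

```shell
# What sustained write speed does each drive in a two-way stripe
# need in order to saturate one 10GbE link?
link_mbit=10000
drives=2
link_mb_s=$(( link_mbit / 8 ))          # 10Gb/s is about 1250 MB/s
per_drive=$(( link_mb_s / drives ))     # each striped drive's share
echo "link: ${link_mb_s}MB/s, per-drive target: ${per_drive}MB/s"
```

So each drive in the pair only has to sustain roughly 625MB/s of writes to keep a single 10GbE link full, which is a much easier bar than asking one device to do it alone.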