I've been a hardware RAID guy for as long as I've been a RAID guy of any kind. But now...I'm going to try to make the leap to ZFS and FreeNAS. My goals are twofold. First and foremost, I need a network-based storage system to test things like iSCSI over 10GbE. Second, I need to replace my aging backup RAID array. I also have a third goal of setting up a network-based datastore for my ESXi hosts, but that's down the road.
So I have broken this into three phases. Phase one would give me the new physical server necessary to run FreeNAS, along with a single NVMe drive to use as a simulated super-fast iSCSI device for my benchmarking. Here is my preliminary plan and cost estimate to present to my finance committee (AKA my wife...) for approval once I have all of the costs finalized. So here it is...
The goal is to connect all four servers (my existing lab of three servers plus my new FreeNAS server) directly to my Dell X1052. I would then also connect each server directly to the FreeNAS box for a dedicated iSCSI connection. I've chosen the S2600CP as it is readily available at Natex, and other dual-socket LGA 2011 boards seem to be difficult to find. I've gone with 128GB of RAM, as 256GB would nearly double the cost of the hardware for the foundation of the box. I've chosen a pair of DA2s to provide the four ports necessary for my connection strategy. But wait...I have questions.
I've done a lot of research on the S2600CP and have two concerns/questions. First, will it fit in an SC846 chassis? I have an SC846 today, and I'm running an EEB motherboard in that server just fine. So I think the answer is yes, but I've heard of people drilling and tapping cases, and I'd rather not have to do all that.
Second, the PCIe slots on the S2600CP seem fishy. If I understand correctly, you can only run certain cards in 3.0 mode as of the newest BIOS. So I might have to stay on an old BIOS and use only certain slots? This really only matters for my NVMe devices, so it might not be an issue, but IOPS are key to my benchmarking, so this is a big one.
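To put rough numbers on why the slot generation matters: a back-of-the-envelope calculation of usable PCIe bandwidth per slot, using the commonly cited per-lane figures (~500 MB/s for Gen2 after 8b/10b encoding, ~985 MB/s for Gen3 after 128b/130b). These are approximations, not measurements from this board:

```python
# Approximate usable per-lane bandwidth in MB/s, after encoding overhead.
# PCIe 2.0: 5 GT/s with 8b/10b encoding  -> ~500 MB/s per lane.
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~985 MB/s per lane.
PER_LANE_MB_S = {"2.0": 500, "3.0": 985}

def slot_bandwidth(gen: str, lanes: int) -> int:
    """Approximate usable bandwidth of a PCIe slot, in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

# An x4 NVMe device falling back to Gen2 tops out around 2 GB/s,
# versus roughly 3.9 GB/s in a slot running at Gen3.
print(slot_bandwidth("2.0", 4))  # 2000
print(slot_bandwidth("3.0", 4))  # 3940
```

So a slot stuck at Gen2 roughly halves the ceiling for an x4 NVMe card, which is why this concern is worth chasing down before buying.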
If you've made it this far...thanks for reading...now on to phase two, replacing my aging RAID array. I've already had to replace a pair of drives over the last 18 months, so I imagine it isn't going to get better. I might repurpose the array when I'm done with the FreeNAS build to be a secondary backup for key files, but that's about it. Here's the plan:
I have fewer concerns here. I plan on picking up three LSI 9210-8i cards (no expanders!) and hooking those into my 846TQ with some long mini-SAS cables. Then I'll purchase ten 5TB drives. Six to ten drives is what I've heard makes the most sense for a RAIDZ array, so I maxed it out. So I would have 50TB of raw storage in a RAIDZ2, giving me an effective 40TB of storage. This would roughly double my current 24TB raw and 20TB effective storage. My only question here is...am I missing anything? Is there a better option for 5TB drives than these? They seem to be the best deal for a fully warrantied drive.
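The capacity math above can be sanity-checked with a quick sketch. RAIDZ2 burns two drives' worth of space on parity, and it's also worth remembering that drive makers sell decimal TB while ZFS reports binary TiB, so "40TB effective" shows up smaller in `zpool list` even before filesystem overhead:

```python
def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAIDZ2 vdev before filesystem overhead:
    two drives' worth of space goes to parity."""
    return (drives - 2) * drive_tb

def tb_to_tib(tb: float) -> float:
    """Convert vendor decimal terabytes to the binary tebibytes ZFS reports."""
    return tb * 1e12 / 2**40

raw = 10 * 5.0                        # 50.0 TB raw
usable = raidz2_usable_tb(10, 5.0)    # 40.0 TB before overhead
print(raw, usable, round(tb_to_tib(usable), 1))  # 50.0 40.0 36.4
```

So the effective 40TB figure checks out, but expect ZFS to show something closer to 36 TiB, minus a bit more for metadata and the usual advice to keep some free space headroom.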
Finally...we have Phase Three. Basically, this is building a new datastore for my ESXi servers to share. I don't care about HA; I just want to try something other than all of my direct-attached SSDs. Here's the plan:
I'm basically getting some of the cheaper Ultrastars out there and throwing a P3700 in there to serve as both the L2ARC and the ZIL. The drive should be plenty big for both and have tons of endurance for my purposes. There are two alternate plans. Alternate plan one would replace the HGST drives with another set of 5TB Toshiba drives...but man, that drives the cost up. Alternate plan two would replace the P3700 with two different drives. My goal is to be able to max out my 10GbE connection, so I'd be open to suggestions for what to replace the P3700 with. I wanted to use a P3600...but the writes on that are kind of slow. They are very fast on the P3700.
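For the "max out 10GbE" goal, the target number is easy to pin down. A quick sketch of the usable payload rate of the link, assuming roughly 95% efficiency after TCP/iSCSI protocol overhead (that efficiency figure is my assumption, not a measured value):

```python
def line_rate_mb_s(gbit: float, efficiency: float = 0.95) -> float:
    """Approximate usable payload rate of an Ethernet link in MB/s
    (decimal), after protocol overhead. The 0.95 efficiency is an
    assumed ballpark, not a measurement."""
    return gbit * 1000 / 8 * efficiency

# 10GbE is 1250 MB/s raw; after overhead, call it roughly 1.19 GB/s.
# A ZIL/SLOG device needs sustained sequential writes above this figure
# to keep sync writes flowing at line rate.
print(round(line_rate_mb_s(10)))  # 1188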
Ok...if you made it this far, I probably owe you some sort of consulting fee. Thanks for reading and any input/guidance/wisdom/knowledge you can all provide!