I am building a home server to take on multiple tasks for my home network: network routing/security, media storage (both archival and real-time DVR), system backups for about 15 systems, a number of VMs for various household services, and some work-related workspaces. Performance is important, but I believe the gear I've selected is up to the task. Reliability and redundancy are also important, but my needs aren't mission-critical. I'm often away traveling for work a few days at a time, and my wife still needs all of this operational since it's integrated into the house. Downtime would really hurt, but I don't need HA redundant pairs of each node with cold spares. Power and noise are medium-priority issues; I think I know what I'm getting into on that front.
Six months ago, when I started reaching the limit of my current setup, I did the research and figured I'd just go with an AiO ESXi box powered by an E3-1240 v2 Ivy Bridge Xeon. I never loved the idea, and just started deleting stuff off my file server to put off the need for a new server. I was hoping something better would come along, and I think the C6100 may be just that. Plus, I've run out of things to delete, so I really need to get a new server up and running.
Here is a rough outline of what I've got planned:
C6100 with 4 nodes, 2x L5520, 24GB per node
- Node 1: Networking - pfSense - Embedded install, boot from USB/DOM
- Node 2: Storage - OmniOS w/ Napp-it; Boot Drives: 2x 40GB Intel 320 SSDs, mirrored
- ZFS Data Store pool: 8x 3TB RAIDZ2 = 18TB effective (considering WD Reds)
- ZFS VM OS Drive pool: 2x 500GB mirror = 500GB effective (considering Samsung 840 Pro)
- Nodes 3 & 4: VM hosts running:
- Linux + Windows workspaces
- Web Server
- Plex/media/entertainment
- Home Automation
- Telephony
- Misc Services
- Misc VMs for staging
- Test new hypervisors
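For concreteness, here's roughly how I'd expect the two Node 2 pools to be created under OmniOS (pool and device names like c1t0d0 are placeholders; the real device IDs will differ):

```shell
# Data pool: a single 8-disk RAIDZ2 vdev. Two drives' worth of parity,
# so usable space is (8 - 2) * 3TB = 18TB before ZFS overhead.
zpool create tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# VM OS drive pool: the two SSDs in a ZFS mirror.
zpool create vmpool mirror c2t0d0 c2t1d0
```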
Networking gear:
- HP ProCurve 1810 or 1910
- Possibly a Ubiquiti EdgeMAX hardware router in lieu of pfSense
1) Will an embedded pfSense install work on a node?
2) Any concerns about the storage layout? Can I wire up the drives so that they all go to Node 2? Any significant benefit to running the VM OS Drive pool directly on Node 3, as sort of an AiO ZFS/VM host node?
3) Any other suggestions on the OS drive for OmniOS? 2x 40GB Intel 320 SSDs seems good to me, but I'm open to other ideas. Mirroring here is smart because if the filer is down, everything is down.
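As I understand it, on OmniOS the boot-drive mirror would be handled by ZFS itself rather than hardware RAID 1, something like this (the second SSD's device name is a placeholder):

```shell
# Attach the second Intel 320 to the existing root pool, turning the
# single-disk rpool into a two-way mirror, then make the new disk bootable.
zpool attach rpool c0t0d0s0 c0t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```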
4) What are my expandability options for storage? How would I go about adding another 8 drives for another vdev? Can I design the system better now for future expansion? I know I will use up all of this space at some point, so I want to consider future growth in the build.
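If I understand ZFS expansion correctly, growing the pool later would just mean striping in a second RAIDZ2 vdev (again, placeholder device names):

```shell
# Add a second 8-disk RAIDZ2 vdev to the pool. ZFS stripes new writes
# across both vdevs but does not rebalance data already on the first.
zpool add tank raidz2 \
    c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
```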
5) What is the natural bottleneck here given my use and design? If I could do something better, what is that? What's the best bang for the buck upgrade or change? What's the best way to add more reliability or redundancy?
- I'm considering upgrading the RAM. Any advantage to doing it now, or can I wait on that?
- Would 10Gb be useful (InfiniBand)? I feel like the throughput and IOPS will be just fine with the 1GbE links. If I went with additional NICs in each node, is there any benefit to link aggregation?
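On the OmniOS side, I gather link aggregation would look something like this with dladm (interface names and the address are placeholders, and the switch ports would need matching LACP config):

```shell
# Bond two GbE links into an LACP aggregation, then plumb IP on it.
dladm create-aggr -L active -l igb0 -l igb1 aggr0
ipadm create-if aggr0
ipadm create-addr -T static -a 10.0.0.2/24 aggr0/v4
```

My understanding is this helps aggregate throughput across multiple clients but won't speed up any single TCP stream past 1Gb.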
6) What else? Am I missing anything? What would you do differently?