OK, let me share with you guys the same thing I put up on the Azure Stack HCI Slack group that I just joined:
Here is my current hardware list that this references:
list of hardware 3-26-2020.pdf
My original idea was a primary server and a secondary server to back up the first, to set up the "3-2-1 rule" for backing up data, with the third copy probably being in the cloud (Backblaze or something) or maybe at my parents' house. However, as I gathered hardware on Amazon, eBay, and the junk pile at work, I realized that I could possibly do more with it. I work as a server system engineer running a datacenter, mostly VMware, but I have an extensive background in Windows, Hyper-V, and networking as well. I have been doing a lot of research into the ReFS and ZFS file systems for high-reliability data storage, so I started researching/playing with Storage Spaces.
I started getting frustrated because much of the documentation talks about S2D, and it is becoming hard to figure out which system can do what I want. I do not necessarily need an S2D cluster in my home, but I am interested in HCI, replication, de-duplication/compression, and related technologies. It also looks like I might be able to use Storage Replica to copy the data to the second server (or a third?). I am wondering if it can do so without re-hydrating deduplicated data; that would be nice. I could also potentially play with RDMA/iWARP.
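For reference, the basic Storage Replica setup I have been reading about boils down to a couple of cmdlets. The server names, drive letters, and replication group names below are just placeholders, and this sketch assumes both boxes run Datacenter edition with the Storage Replica feature installed — I have not tried this on my hardware yet:

```powershell
# Validate the proposed topology first (placeholder names SRV1/SRV2, D: data, L: log)
Test-SRTopology -SourceComputerName SRV1 -SourceVolumeName D: `
    -SourceLogVolumeName L: -DestinationComputerName SRV2 `
    -DestinationVolumeName D: -DestinationLogVolumeName L: `
    -DurationInMinutes 30 -ResultPath C:\Temp

# Create the replication partnership (RG1/RG2 are placeholder group names)
New-SRPartnership -SourceComputerName SRV1 -SourceRGName RG1 `
    -SourceVolumeName D: -SourceLogVolumeName L: `
    -DestinationComputerName SRV2 -DestinationRGName RG2 `
    -DestinationVolumeName D: -DestinationLogVolumeName L:
```

Whether the replicated blocks stay deduplicated end-to-end is exactly the part I still need to verify.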
I am willing to set up S2D if that is what I have to do and if I cannot get what I need from regular SS, but I do not want to use too much electricity, and at some point my wife will get irritated with what all this costs. As for data, I have around 20TB of family photos, videos, BD/DVD ISOs, and such. I am going to get more into smart home, Plex, and all that sort of stuff over time, so it will be nice to have a small infrastructure of servers to support it. So this is not technically a homelab per se, but I could certainly use a portion of it for that.

This is where I currently stand:
- Researching and gathering all the needed hardware (see attached list above)
- Researching the commands and process for creating a mirror-accelerated parity array
- Researching the associated hardware needed to support this configuration properly, as if it were production
- Use the 7x 10TB drives for dual parity and some combination of the SSDs for cache
  - Looks like there are a lot of rules for the number and size of cache drives vs. data drives
  - This is especially where I run into SS vs. S2D issues
- I am starting to make a list and compile all the requirements
- Finding all the little gotchas that I could encounter and trying to solve them
- Best way to replicate, copy, or backup the data from primary server to the secondary
- Currently looking into what it would take to actually setup proper S2D
- As you can see from my hardware list, I got lucky and ended up with a fair bit of good stuff
- Researching the network cards and switches needed for RDMA/RoCE/iWARP; have not bought anything yet
  - Seeing if it would be possible to just do two nodes with crossover cables and not need a switch
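For the mirror-accelerated parity piece, the rough command sequence I have pieced together so far looks like this. Pool, tier, and volume names plus the tier sizes are just placeholders, and this is an untested sketch against generic hardware, not my actual drive list:

```powershell
# Pool all eligible disks (placeholder pool name "Pool1")
New-StoragePool -FriendlyName Pool1 `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Mirror tier on the SSDs, dual-parity tier on the HDDs
New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName Performance `
    -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName Capacity `
    -MediaType HDD -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

# One ReFS volume spanning both tiers (placeholder sizes)
New-Volume -StoragePoolFriendlyName Pool1 -FriendlyName Data -FileSystem ReFS `
    -StorageTierFriendlyNames Performance,Capacity -StorageTierSizes 200GB,30TB
```

ReFS is what makes the mirror-to-parity rotation work, which is part of why I keep bumping into the SS vs. S2D documentation split.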
I am actually going to go ahead and buy a second X10SRH-CF motherboard since I already have a CPU and RAM for it. Still debating whether I will buy a second 16-slot Supermicro chassis; maybe. Pretty sure these support the SES-2 enclosure management/awareness that S2D likes to see.
-JCL