Well, by my Phase 3, reliability and not interrupting work will be paramount. Though yes, it is true, I am looking for alternatives that accomplish the same goal! That's why I want to work out everything on paper, including future scale-out plans, so that I don't find myself painted into a corner, realizing my choices are preventing me from taking the next step and not knowing what to do. (That was where I ended up when I just kept storing data on external USB drives - only to find my data corrupted later by silent bit rot and having no idea what to do after the damage was done.)

SANs are designed first and foremost to be reliable; performance is second.
To be honest, a SAN is not what you are asking for, as they get exponentially more expensive as you grow larger and faster.
Both SnapRAID and ZFS can scale as large as you want; the beauty of both is that they are very flexible about hardware upgrades. ZFS is a little more stringent on disk requirements because of the parity layout. If your files change heavily, then ZFS ends up being the most common choice for roll-your-own storage.
I like the fact they can both scale up - that means I'm not forced into a change, but rather can choose when to change. If the manual snapshot nature of SnapRAID starts to get annoying or feel burdensome, I might migrate to ZFS later, but only after I have a bunch of matched drives to set up as vdevs, striped and mirrored the right way. SnapRAID is the cheapest growth system that I can upgrade bit by bit; after things are more established I might jump to a different system right away - or I might stick with it for years. But at least I won't be caught flat-footed in another data flood. We have more data coming in from film shoots than we know what to do with, and on the minimum budget it's just "buy drives and write tapes."
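For illustration, here is a rough sketch of what that piecemeal growth looks like in practice with SnapRAID - the mount points and disk names are just placeholders, not a recommendation:

    # /etc/snapraid.conf - parity and content files on a dedicated parity disk
    parity /mnt/parity1/snapraid.parity
    content /mnt/parity1/snapraid.content
    content /var/snapraid/snapraid.content
    # one line per data disk; mismatched sizes are fine as long as the parity disk is at least as large as the biggest data disk
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    data d3 /mnt/disk3/
    # when a new drive arrives, add another "data d4 /mnt/disk4/" line, then run:
    #   snapraid sync
    #   snapraid scrub

Nothing else has to change when a disk is added, which is exactly why it suits the buy-drives-as-needed approach.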
To the remote-access scale-out stuff: that was another reason I was curious about Fibre Channel and InfiniBand, especially with how cheap decade-old-generation gear is (the 4/8Gb FC and the 10Gb IB). I might not use them for the NAS Data Vault, but they might matter for a highest-performance array, since they lack the overhead of TCP/IP and Samba or whatever other filesystem access needs to be done. I am not sure where the bottleneck is when doing massive operations on 8K video in Adobe CC, but usually it will be the scratch disk system. And sharing one such system with minimal latency may be more affordable than one per system at times, at least at first - like a single Intel 750 NVMe shared to 3 workstations via IB. Ideal would be an Intel 750 per system, but the older IB gear is so cheap it doesn't cost much to let 2 other systems share the main system's SSD. I can't imagine 100Gb IB ever being a bottleneck(!) - what kind of systems does that occur on??

I think it's not so much that a SAN is required for the highest levels of performance.
The current trend seems to be more towards software-defined scale-out storage, possibly combined with a hypervisor to build a hyper-converged solution. Modern high-end storage is getting so fast that accessing it remotely, even over, say, a 100Gb IB fabric, adds very significant latency - remote access would defeat the entire point of doing things like putting 3D XPoint storage onto the memory bus.
Also, don't look at the prices of proper enterprise gear and immediately eliminate it due to cost.
My recommendation to you, if you want to keep things cheap/control costs, is to start building a hardware platform that you will be able to use for years to come, and that you can continue to add to without having to replace things.
Probably start out with something like a used Supermicro SC836/846 chassis.
Stay away from hardware RAID cards that lock you into their specific feature set - keep all the smarts of the system in software on top of generic hardware.
Being interested in SANs - if a cluster does the job better, that is fine too. I just want redundancy and high availability so that work isn't interrupted at some future point. (This is my 'Phase 3' upgrade, after ample storage and ample performance are attained.) I read something about SAS dual-porting - if it were something like literally powering on either of two servers that access the same drives, I'd be totally content with that. Especially if the SAS expanders themselves are easy to swap out, so that any point of failure has either a backup on the shelf or one already set up if you get to work at 9am and there's a problem. If I can set this up with the same software and tools, great; if I have to consider migrating to a totally different system, it's good to know in advance what's on the list.
Concerning the chassis and such - one thing I like about how SnapRAID seems to work is that even if I start with something as crappy as my older Core 2 Duo box with some PCIe SATA adapters and, say, 11 drives in an ATX box, I should be able to buy that Supermicro chassis at any later time, swap all the drives over, and nothing is impacted. No data to migrate - just plug in the new LAN cable, because the data is already on the drives after all. I can't afford a 24-bay hot-swap chassis tomorrow; I can just afford the drives our video data will be going onto for now (especially after buying the $2000 Ultrium drive). But I can plan to upgrade to that or watch for deals in the future. That sounds like it's less straightforward on a FreeNAS ZFS system, and even if it isn't, ZFS won't let me add and upgrade drives piecemeal in the existing storage pool. It was the frown on my face I noticed when planning how I was going to have to buy matched sets of 8 drives at a time and migrate all the data over to a new server bought with 8 new drives, just to properly upgrade space, that made me look for alternatives to begin with.
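A rough sketch of what that chassis swap might look like, assuming SnapRAID on Linux with labeled filesystems (labels and mount points are placeholders):

    # after moving the disks into the new chassis, mount them at the same points
    mount /dev/disk/by-label/disk1 /mnt/disk1
    mount /dev/disk/by-label/disk2 /mnt/disk2
    mount /dev/disk/by-label/parity1 /mnt/parity1
    # then confirm the array still matches the last sync
    snapraid diff
    snapraid status

As long as the same mount points appear in snapraid.conf, the array comes up exactly as it was - there is no pool metadata tied to the old box.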
Well, I haven't looked in years - my massive data loss happened about 5 years back, and at the time nobody wanted to go past 32TB without telling me to "hire a professional"/that it was beyond DIY. And everyone was adamant that 1TB of HD per 1GB of RAM was the accepted best-practices guide as well. Since I couldn't even find anyone with a 64TB/64GB system to ask, I panned the plan back then.

What? I have 2x 86TB and 2x 116TB FreeNAS systems... I know I've seen a bunch of people talking about systems with much higher capacity than mine on the FreeNAS forum.
My FreeNAS servers have 32GB of RAM and 64GB of RAM. Definitely not 1:1.
Also, it's common for people to add disks in pairs to ZFS and use mirroring for easy, affordable expansion.
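For what it's worth, a minimal sketch of that pattern (pool name and device paths are placeholders - use /dev/disk/by-id paths on a real system):

    # start with a single mirrored pair
    zpool create tank mirror /dev/sda /dev/sdb
    # later, grow the pool two disks at a time by striping in another mirror
    zpool add tank mirror /dev/sdc /dev/sdd
    zpool status tank

Each added pair extends capacity immediately; the trade-off versus RAIDZ is that mirrors only give you 50% usable space.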
Even if I were re-sold on FreeNAS, I still have the issues of having to buy matching drive sets, migrating data into/out of vdevs, no easy way for me to back up to tape from it, etc. I may look into it as a secondary high-performance NAS option in the future, though, after the Data Vault is online and operational.
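On the tape side, the simplest thing I can see on a plain Linux SnapRAID box is tar straight to the drive - a rough sketch, assuming the Ultrium drive shows up as /dev/nst0 (the non-rewinding device) and plain tar is acceptable rather than LTFS:

    # position the tape and write one data disk per archive
    mt -f /dev/nst0 rewind
    tar -cvf /dev/nst0 -C /mnt/disk1 .
    # read the archive back to verify it before the tape goes on the shelf
    mt -f /dev/nst0 rewind
    tar -tvf /dev/nst0 > /dev/null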