Well, I suppose you could consider all storage to be "non-volatile memory", and sure, an externally attached array qualifies just as much as a local NVMe device.
And there are still plenty of reasons why centralized shared block storage (aka SAN) has value. There is still lots of R&D going into that area. The back-end architectures seem to be moving away from active-active dual-controller proprietary RAID systems toward software-defined scale-out on generic hardware, but it's still a SAN.
Well, for my particular situation I'm trying to figure out "when do I need to look at a SAN in my future server planning?"
I know it sounds bad to say "I'm shopping for something that I don't even know I need yet!", but that's because I'm being forced into the role by circumstance. I'll know what I need once I see it; the only question is when it becomes critical to implement.
Like right now, my assumptions are basically the following:
- I need to start with a NAS. I've chosen to use SnapRAID. That isn't perfectly suited to things like virtualization or serving as Adobe CC scratch drives, but it lets me get my feet wet, because I'm still going to need "big dumb cheap but RELIABLE bit buckets" that won't corrupt over the years.
- When I start processing video, an NVMe SSD is going to be the next upgrade for applications.
- Yet at some point, if my work files grow larger than the 1.2TB Intel 750, or I need even more performance, I start looking at things like InfiniBand QDR or FDR (32-40 Gb) serving RAID stripes of SSDs on SAS expanders, running Openfiler as a custom SAN. There's a bit more latency than NVMe, but not much (I've seen InfiniBand Openfiler numbers of 90-100k IOPS), and my media processing workloads (if the CPU is bottlenecked by both speed and storage space) need raw bandwidth and capacity, and that's probably the only way to get there.
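For what it's worth, the SnapRAID side of that first bullet is just a flat config file plus a couple of cron-able commands, which is part of why it suits the "big dumb reliable bit bucket" role. A minimal sketch (the mount points and disk names here are made up for illustration; adjust to your layout):

```
# /etc/snapraid.conf -- example layout, paths are placeholders

# One (or more) dedicated parity disks
parity /mnt/parity1/snapraid.parity

# Content files track the state of the array; keep copies on multiple disks
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Data disks, each a plain filesystem
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

# Skip junk that doesn't need protecting
exclude *.unrecoverable
exclude /lost+found/
```

Then `snapraid sync` computes parity after files change, and periodic `snapraid scrub` runs catch silent corruption before it spreads, which is exactly the long-term bit-rot protection I'm after.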
I'm not sure what the virtualization sessions might need to run on (NAS or SAN or what), but the sequential media processing, if it needs more bandwidth and available space than NVMe can offer, might finally give me a use case justifying a custom SAN. Does that seem about right? It's about the application being run, and only a build that extreme would probably even outperform NVMe (in total storage space and bandwidth, not necessarily latency) to begin with. Alternatively, a latency-bottlenecked app that also wants more space will have to wait for larger NVMe drives, or run more than one if you can.
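To sanity-check the "raw bandwidth" comparison above, here's a back-of-envelope sketch. The link rates and encoding overheads are the published InfiniBand figures (QDR 4x is 40 Gb/s signaling with 8b/10b encoding, FDR 4x is 56 Gb/s with 64b/66b), and the ~2.4 GB/s number is the Intel 750 1.2TB's datasheet sequential read; treat all of them as assumptions to verify against your actual hardware:

```python
# Back-of-envelope usable-bandwidth comparison.
# All figures below are assumptions from public specs, not measurements.

GBIT_TO_GBYTE = 1 / 8  # gigabits/s -> gigabytes/s

links = {
    # name: (raw signaling rate in Gb/s, encoding efficiency)
    "InfiniBand QDR 4x": (40.0, 0.80),      # 8b/10b encoding -> 32 Gb/s usable
    "InfiniBand FDR 4x": (56.0, 64 / 66),   # 64b/66b encoding
    "Intel 750 1.2TB (seq read)": (2.4 * 8, 1.0),  # ~2.4 GB/s per datasheet
}

for name, (gbps, efficiency) in links.items():
    usable_gbs = gbps * efficiency * GBIT_TO_GBYTE
    print(f"{name}: ~{usable_gbs:.1f} GB/s usable")
```

So a QDR link tops out around 4 GB/s and FDR around 6.8 GB/s of wire bandwidth, versus roughly 2.4 GB/s for the single drive, which is the gap that would have to justify the whole SAN build (and the array behind the link still has to be able to feed it).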