I appreciate all the ideas for brainstorming; it's definitely got me researching new avenues and leading to some progress.

Have you looked at SPDK (the Storage Performance Development Kit)?
https://dqtibwqq6s6ux.cloudfront.ne...e-reports/SPDK_nvmeof_perf_report_19.01.1.pdf
I want to try it myself, just for educational purposes, but have no time at the moment.
I don't know if it's production-ready, but if you are experimenting you could try it and post results here.
Also, for RAID on NVMe you can try Intel VROC. The upgrade key is inexpensive enough for you to try, I think: https://www.amazon.com/Intel-Compon...ds=vroc+intel&qid=1557198183&s=gateway&sr=8-3
https://www.intel.com/content/dam/w...fs/virtual-raid-on-cpu-vroc-product-brief.pdf
Again, I have no experience with it; just brainstorming.
I have tried Intel VROC, with mixed results. My biggest issues are:
-Creating the RAID in the pre-boot environment doesn't actually create a RAID volume that's recognized by anything other than the Windows installer. Linux, macOS, and Windows itself see it as separate drives after the volume is created, so an additional software-defined RAID has to be built on top. Unless it's a Windows boot drive, the OS is still managing the RAID. That's OK, but because of how consumer motherboards bifurcate PCIe lanes, the volume can be destroyed without notice. On the X299 platform I had a test RAID volume for both Mac and Windows. It worked fine until I installed another NIC, which reallocated the lanes VROC was using. If you boot with part of the volume missing, the array is degraded and there is no coming back; this happened to me while testing RAID 0 and RAID 1. With RAID 5, I assume the same would happen if two drives failed to present themselves, even with nothing actually wrong with them. It's fine for testing, or if your configuration never changes, but it's just too risky otherwise. I wouldn't mind the risk if I could rebuild a failed 4 TB array over 100GbE; I could do that in no time, so losing a RAID 0 that could be rebuilt in minutes is no issue for me. If I can't get the data back from another machine that fast, though, the risk isn't worth it.
-Also, the performance: compared to a single NVMe boot drive, I can't say it makes a difference to me. I don't hit any constraints booting or operating on a single NVMe. Same with my Unraid VMs, which sit on an NVMe array that hits 25 gigabytes per second (gigabytes, not gigabits), and so what? 95% of everything already loads instantaneously on a single high-performance NVMe. The scale achieved is milliseconds or lower, and that latency is already undetectable to human perception, which is all that matters for my project. For a bank, it might show a benefit.
-The last thing is along those lines. If the speed doesn't matter to the OS, the only other use case for VROC would be transferring files. I could get some fantastic internal transfer rates, but that raises the question: what's the use of transferring faster if the data never leaves the computer itself? It made sense back when 8 TB NVMe drives weren't on the horizon.
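To put rough numbers on the "milliseconds or lower" point above, here's a back-of-the-envelope sketch in Python. The throughput figures, the asset size, and the ~100 ms threshold for a noticeable delay are my own illustrative assumptions, not measurements from this system:

```python
# Rough load-time comparison: single NVMe vs. a fast NVMe array.
# All figures below are illustrative assumptions, not benchmarks.
single_nvme_gbps = 3.5   # GB/s, typical high-end single NVMe sequential read
array_gbps = 25.0        # GB/s, the array figure quoted above

asset_mb = 200           # hypothetical size of an asset being loaded, in MB

single_ms = asset_mb / 1000 / single_nvme_gbps * 1000
array_ms = asset_mb / 1000 / array_gbps * 1000

print(f"single NVMe: {single_ms:.1f} ms, array: {array_ms:.1f} ms")
# Both land well under ~100 ms, where a delay starts to feel noticeable,
# so the array's extra bandwidth buys nothing a person can perceive here.
```

In other words, the array is roughly 7x faster on paper, but both results sit below the threshold of human perception, which matches the experience described above.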
The only use case I have is the one at hand. I'm mainly concerned with read speed for the streaming content, and there's very little need to write that data back on the client. If I could get 40GbE going, I can live with the slower write speed of a single NVMe.
That's my rationale for giving up on VROC.
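For what it's worth, the wire-speed arithmetic behind the two network claims above (rebuilding 4 TB over 100GbE in minutes, and feeding a single NVMe from 40GbE) works out like this. The ~90% usable-payload efficiency and the single-NVMe write figure are my assumptions, not measurements:

```python
# Back-of-the-envelope: line rate in GB/s = Gbit/s divided by 8.
# The efficiency factor is an assumed allowance for protocol overhead.
def usable_gb_per_s(gbit_per_s, efficiency=0.9):
    return gbit_per_s / 8 * efficiency

# Rebuilding a failed 4 TB (4000 GB) array over 100GbE:
rebuild_s = 4000 / usable_gb_per_s(100)
print(f"4 TB over 100GbE: ~{rebuild_s / 60:.1f} minutes")

# 40GbE tops out around 4.5 GB/s of payload; a single NVMe's write
# speed (roughly 2-3 GB/s for a decent drive; assumed figure) is in
# the same ballpark, so a client-side RAID buys little.
print(f"40GbE payload: ~{usable_gb_per_s(40):.1f} GB/s")
```

So a full 4 TB rebuild over 100GbE is on the order of six minutes, which is consistent with treating a lost RAID 0 as an acceptable risk, and 40GbE is close enough to single-NVMe write speed that the client side doesn't need an array.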