I use x16 PCIe cards that house 4 NVMe drives each and bifurcate several of the board's x16 PCIe slots into x4/x4/x4/x4 so each drive gets its own link. On top of that I run Storage Spaces Direct, which, after some performance optimizations, delivers close to 92 Gbit/s over the network, nearly saturating the 100 Gbit NICs (Mellanox ConnectX-4). The raw throughput of the NVMe drives in RAID 0 is even higher than 92 Gbit/s, so I assume that's the point at which the network becomes the bottleneck.
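For anyone wanting the RAID 0 part in concrete terms: in Storage Spaces, striping with no redundancy is the "Simple" resiliency setting on a pool of physical disks. Below is a minimal single-node sketch using the standard Storage cmdlets; the pool and volume names are placeholders, and since my actual setup uses the clustered Storage Spaces Direct variant (Enable-ClusterStorageSpacesDirect), treat this as illustrative rather than my exact commands.

```powershell
# All names here are placeholders; run from an elevated PowerShell session.
# Find the disks that are eligible for pooling (the NVMe drives, in my case).
$nvme = Get-PhysicalDisk -CanPool $true

# Group them into a Storage Spaces pool.
New-StoragePool -FriendlyName 'NvmePool' `
    -StorageSubSystemFriendlyName 'Windows Storage*' `
    -PhysicalDisks $nvme

# 'Simple' resiliency is striping with no redundancy, i.e. RAID 0.
# NumberOfColumns = how many drives each write is striped across;
# 4 here matches one 4-drive carrier card.
New-Volume -StoragePoolFriendlyName 'NvmePool' `
    -FriendlyName 'Stripe' `
    -FileSystem ReFS `
    -ResiliencySettingName Simple `
    -NumberOfColumns 4 `
    -UseMaximumSize `
    -DriveLetter S
```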
Hope this helps; let me know if you need even more details. The performance optimizations were very tricky and definitely took the longest of the entire setup to fine-tune. I gave up on drive/folder mapping and shared folders because Windows imposes OS-level bottlenecks there that I have never been able to figure out over the years. I would consider Linux as the OS for the file server, but I run several backup applications that constantly back up changes on the RAID 0 volume to mirrored backup disks and also sync changes to the cloud, and to my knowledge some of the cloud-sync apps I use are not available on Linux.
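I won't claim these were my exact tweaks, but on ConnectX-4 hardware the usual ceiling is whether SMB Direct (RDMA) and SMB Multichannel are actually engaged rather than silently falling back to plain TCP. A few hedged diagnostics, assuming a Windows client/server pair and an adapter name of 'CX4' (a placeholder):

```powershell
# 'CX4' is a hypothetical adapter name; substitute your own.
# SMB Direct (RDMA) must be enabled end to end, or SMB falls back to TCP.
Get-NetAdapterRdma -Name 'CX4*'    # is RDMA enabled on the NIC?
Get-SmbClientNetworkInterface      # does SMB see RDMA-capable interfaces?
Get-SmbMultichannelConnection      # are live transfers actually using them?

# Jumbo frames are a common win on this class of NIC.
Set-NetAdapterAdvancedProperty -Name 'CX4*' `
    -RegistryKeyword '*JumboPacket' -RegistryValue 9014
```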
Nice.
How do you RAID 0 the NVMe drives? Any further details you can share re the setup?