I've been playing with various ways to bench NAS performance as seen from a Windows 8-based client. After googling for quite a while, I've come to the conclusion that there isn't any good tool for this and no generally accepted method...which sucks.
I did decide not to tear down the Server 2012 R2 system to load napp-it/ZFS. Just too much hassle to put it back. I do have two other high-quality ZFS servers on my 10GbE LAN to test against.
Server 1:
- Solaris based ZFS file server
- Bare metal load of Solaris 11.1 and Napp-it.
- Xeon X3460 @ 2.8GHz, 32GB 1333 ECC
- 20x Hitachi 2TB 5900 RPM (coolspin) set up as a single pool with 2x 10-drive RaidZ2 vdevs
- Drives connected by 3 separate M1015s flashed to IT mode (no bottlenecks)
- 2x 10GbE links active on an Intel 82599-based NIC
Server 2:
- ZFS on Linux running on Proxmox 3.0 (Debian Wheezy)
- Dual Xeon E5-2667 on a Supermicro X9DRL-3F motherboard, 128GB 1600 ECC
- 8x Seagate 4TB 5900 RPM drives in a single RaidZ2
- Drives connected by on-board SAS ports (no bottlenecks)
- 2x 10GbE links on a Mellanox ConnectX-3 EN NIC
The pools on both systems bench locally in Bonnie at over 1,000MB/s sequential read and over 500MB/s sequential write, so they should provide a good point of comparison.
So...I built a RAM disk on the client machine. I've got a 4.68GB folder containing the ripped files from a DVD (a good mix of small and big files), and I'll look at speeds copying it to/from each server. Since there are no reliable benches available, this will have to do for now.
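For anyone who wants a repeatable number instead of watching the copy dialog, a rough timing script along these lines would do the same job. This is just a sketch: the drive letter, share path and folder name below are placeholders, not my actual setup.

```python
import shutil
import time
from pathlib import Path

SRC = Path(r"R:\dvd_rip")                    # ~4.68GB test folder on the RAM disk (placeholder name)
DST = Path(r"\\server1\tank\bench\dvd_rip")  # destination share (hypothetical path)

def dir_size_bytes(root: Path) -> int:
    """Total size of all files under root, in bytes."""
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())

def timed_copy(src: Path, dst: Path) -> float:
    """Copy the src tree to dst and return average throughput in MB/s."""
    if dst.exists():
        shutil.rmtree(dst)                   # start from a clean destination each run
    start = time.perf_counter()
    shutil.copytree(src, dst)
    elapsed = time.perf_counter() - start
    return dir_size_bytes(src) / elapsed / 1_000_000

if __name__ == "__main__":
    print(f"write to server:  {timed_copy(SRC, DST):.0f} MB/s")
    print(f"read from server: {timed_copy(DST, SRC.with_name('dvd_rip_back')):.0f} MB/s")
```

It only gives an average over the whole transfer, so it won't show the slow-start and burst behavior visible in the copy graphs below, but it's enough to compare servers.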
Here's a quick look at the RAM disk performance.
It ought to be fast enough to be a source or sink for file copies.
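If you'd rather check that yourself than trust a screenshot, a crude sanity test is to write and re-read a big file on the RAM disk and confirm it comfortably exceeds 10GbE line rate (~1,250MB/s). A minimal sketch, assuming the RAM disk is mounted as R:\ (the re-read will partly come from the OS cache, which is fine for a ceiling check):

```python
import os
import time

PATH = r"R:\seqtest.bin"         # assumes the RAM disk is mounted as R:\
CHUNK = 8 * 1024 * 1024          # 8MB blocks
TOTAL = 2 * 1024 * 1024 * 1024   # 2GB test file (must fit on the RAM disk)

def seq_write() -> float:
    """Sequentially write TOTAL bytes and return MB/s."""
    buf = os.urandom(CHUNK)
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return TOTAL / (time.perf_counter() - start) / 1_000_000

def seq_read() -> float:
    """Sequentially read the file back and return MB/s."""
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(CHUNK):
            pass
    return TOTAL / (time.perf_counter() - start) / 1_000_000

if __name__ == "__main__":
    print(f"RAM disk sequential write: {seq_write():.0f} MB/s")
    print(f"RAM disk sequential read:  {seq_read():.0f} MB/s")
    os.remove(PATH)
```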
Here are copies to/from Server 1 (Solaris ZFS, 20x Hitachi 2TB, local speeds >1,000MB/s read, >500MB/s write):
Copying from the RAM disk to the Solaris server showed a slow start, mostly around 200MB/s, with a speed burst at the end to >250MB/s.
Copying the same directory back, meanwhile, held a nice, solid 300MB/s for the whole transfer.
Here are the same copies to/from Server 2 (Debian Wheezy ZoL, 8x Seagate 4TB, local speeds >1,000MB/s read, >500MB/s write):
Copying from the RAM disk to the ZoL server showed a slow start, mostly around 200MB/s, with a speed burst at the end to >270MB/s.
Looks like a carbon copy of the Solaris-based server, just a bit smoother.
Copying the same directory back, meanwhile, held a nice, solid 300MB/s for the whole transfer.
This one is shockingly similar to the Solaris machine.
So how did the Server 2012 R2 machine do? Did SMB3 give it a huge advantage?
No - but it didn't do too badly either.
Here's copying from the workstation RAM disk to the Server 2012 R2 machine. It's interesting - a long, slow start period and then a ramp to over 400MB/s!
The return path (server back to client) came in at a nice smooth 220-230MB/s.
That's only about three-quarters of the read speed the two ZFS servers delivered.
Conclusion
So what's the bottom line? I'd say that Server 2012 R2 and its derivatives are finally showing some moxie for file services vs. ZFS. Storage Spaces still struggles with software parity layouts, but Microsoft has built a strong SSD-based journal/cache approach that makes it usable, if still a bit slower on the local machine than a comparable ZFS build.
Also, SMB3 does bring some advantages. Neither the ZFS machines nor the Server 2012 system came anywhere close to saturating the 10GbE links (roughly 1,250MB/s theoretical, versus the 220-400MB/s seen here), which is highly disappointing. But the Server 2012 build did get a larger percentage of its local filesystem performance onto the wire, leaving the two options at almost exactly the same performance as seen from the client.
Note that this was all done with a single-client workload, which isn't realistic in most cases; most production NAS builds are expected to serve dozens to hundreds of workstations. I don't have any way to predict how either build would behave in that environment.
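If I ever want to approximate that, the crude option is several concurrent copy streams from one box. Something like the sketch below (placeholder paths, eight pretend-clients) would at least show how aggregate throughput scales, though every stream still shares one client NIC and one SMB connection, so it's not a real multi-client test.

```python
import shutil
import time
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path(r"R:\dvd_rip")                 # test folder on the RAM disk (placeholder)
DST_BASE = Path(r"\\server1\tank\bench")  # destination share (hypothetical)
STREAMS = 8                               # number of simultaneous pretend-clients

def one_stream(i: int) -> None:
    """Copy the whole test folder to its own destination directory."""
    dst = DST_BASE / f"client{i:02d}"
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(SRC, dst)

if __name__ == "__main__":
    size_mb = sum(p.stat().st_size for p in SRC.rglob("*") if p.is_file()) / 1_000_000
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        list(pool.map(one_stream, range(STREAMS)))
    elapsed = time.perf_counter() - start
    print(f"aggregate: {size_mb * STREAMS / elapsed:.0f} MB/s across {STREAMS} streams")
```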
Next steps
Over the next several weeks I'll keep tuning and see if I can squeeze out just a bit more. For now - I think that's it.
I do still need to finish setting up a 2nd Server 2012 R2 machine and test with SMB Direct (RDMA over Ethernet). That part will have to wait a few weeks until a couple of parts arrive.