Sorry if this derails the thread a bit!
My ScaleIO experience is limited since I'm only at 3 nodes, only with 1Gb Ethernet, and I haven't yet tried separating storage from compute. That said, here are some of my thoughts:
-Installing is easy if you are going hyperconverged (use the vSphere plugin) or two-layer (use the Installation Manager). Mixing the two during the initial setup is much more manual, so I didn't attempt it.
-I've failed nodes and drives as tests. Even in my setup, ScaleIO is pretty quick to get you back to a safe state: you're back to a protected state before you'd even have the dead drive swapped in a ZFS or normal RAID setup.
-The GUI requires 64-bit Java, and I had trouble getting it working on my desktop, but it works perfectly on another computer. Once it is running, the GUI is very nice: lots of data and control without getting confusing.
-One missing feature that seems odd: I don't see any way for it to email alerts in case of a failure, only SNMP. I might be missing something here.
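If you're stuck with SNMP only, one workaround is to bridge traps to email yourself. Net-SNMP's snmptrapd can hand each received trap to a script via its `traphandle` directive, and that script can forward the text over SMTP. A minimal sketch, assuming a local mail relay; the host and addresses are placeholders, and the script name is hypothetical:

```python
#!/usr/bin/env python3
# Hypothetical snmptrapd handler that forwards trap text by email.
# Wire it up in snmptrapd.conf with something like:
#   traphandle default /usr/local/bin/scaleio_trap_mail.py --send
# snmptrapd pipes the trap (hostname, source address, varbinds) to stdin.
import sys
import smtplib
from email.message import EmailMessage

SMTP_HOST = "localhost"             # assumption: a mail relay on this box
MAIL_FROM = "scaleio@example.com"   # placeholder addresses
MAIL_TO = "admin@example.com"

def build_alert(trap_text: str) -> EmailMessage:
    """Wrap the raw trap text in a simple email message."""
    msg = EmailMessage()
    msg["Subject"] = "ScaleIO SNMP alert"
    msg["From"] = MAIL_FROM
    msg["To"] = MAIL_TO
    msg.set_content(trap_text)
    return msg

def forward(trap_text: str) -> None:
    """Send the alert through the configured relay."""
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(build_alert(trap_text))

if __name__ == "__main__" and "--send" in sys.argv:
    forward(sys.stdin.read())
```

This only reaches you as fast as snmptrapd sees the trap, so it's a stopgap rather than real alerting, but it beats watching the GUI.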
-If you use the plugin for the initial setup (which makes things easy), you will need extra storage in one node to initially run vCenter and the web GUI VM. Otherwise, 16GB or 32GB SATADOMs are all you need. The plugin isn't super user friendly, but after setup you don't really use it much.
-I chose ScaleIO for the flexibility it offers. Nearly any computer can serve storage, access storage, or both, and you can add or remove hard drives or servers at will. My experience here has been awesome, although rebalancing data onto a new drive is a little slow due to my setup.
My setup:
3 nodes (adding a 4th in 2 weeks), each with a Xeon E3-1230 CPU, 32GB RAM, a 16GB SATADOM, and two 3TB 5400rpm SATA drives, running ESXi 6.0U1 and ScaleIO 2.0. Each node uses a single 1Gb connection with everything on one subnet. My LB6M just arrived, so I'll be going to 10Gb this week, and I'm waiting until Black Friday to get some SSDs. Basically, my cluster is the absolute smallest and slowest you could build.
Performance:
Moving large amounts of data runs at about 50MB/s, and running multiple VMs at once produces a lot of random IO instead of sequential IO; random IO to 5400rpm drives over 1Gb isn't ideal. For what I'm doing (bulk media storage plus some VMs) the performance is perfectly fine. With a more serious load, I believe 10Gb and SSDs would perform great even at small cluster sizes. I'm moving to 4 nodes since I'll have the hardware, and usable space will increase from 1/3 to 3/8 of raw. I'll start my own thread with performance notes after I see what 10Gb, SSDs, and a 4th node change. Screenshot is from my file server VM running on the ScaleIO storage.
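The 1/3-to-3/8 jump follows from ScaleIO keeping two copies of every block: if you also reserve one node's worth of raw capacity as rebuild spare (an assumption about how the pool is configured; node count and drive sizes are from my setup above), usable is (raw - raw/n) / 2. A quick sanity check:

```python
from fractions import Fraction

def usable_fraction(nodes: int) -> Fraction:
    """Usable share of raw capacity with two-copy mirroring and
    one node's worth of raw reserved as rebuild spare."""
    spare = Fraction(1, nodes)   # assumption: spare = one node out of n
    return (1 - spare) / 2      # two copies of everything that remains

print(usable_fraction(3))  # 3 nodes -> 1/3 of raw
print(usable_fraction(4))  # 4 nodes -> 3/8 of raw

# My hardware: two 3TB drives per node = 6TB raw per node
raw_tb = 4 * 6
print(raw_tb * usable_fraction(4))  # usable TB at 4 nodes -> 9
```

The marginal gain shrinks as the cluster grows (the spare node becomes a smaller slice of raw), which is part of why small clusters like mine are the worst case for efficiency.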