VM shared storage is a fantastic solution and a lot of fun to build, and there is nothing at all wrong with using a c6100 node as a storage node. For a very low-budget, very low-complexity solution, there is another option: replication. It's simpler to build, cheaper for the same level of performance, and eliminates one possible point of failure.
The problem with shared storage for a low-budget VM build is that it requires some serious performance from the storage node and your network.
On the network side, a Gigabit connection between a VM host and its storage is already a bottleneck: a single GbE link tops out at about 125MB/s in theory, and less in practice. Run several VMs over that one pipe and things feel very slow very quickly. By the time you have more than a handful of VMs, you either have virtual servers that feel like virtual Pentium 4 computers, or you have upgraded to 10GbE or InfiniBand networking.
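To put rough numbers on that bottleneck, here's a quick sketch. The ~118 MB/s usable figure and the VM counts are my assumptions for illustration, not measurements:

```python
# Per-VM share of a single Gigabit storage link.
# Assumption: ~118 MB/s of usable throughput after protocol overhead
# (125 MB/s line rate minus Ethernet/TCP/iSCSI framing).
USABLE_MBS = 118

for vms in (2, 5, 10):
    per_vm = USABLE_MBS / vms
    print(f"{vms:2d} VMs -> ~{per_vm:.0f} MB/s each")
```

Even at five VMs, each one averages less throughput than a single laptop hard drive.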
On the storage side, trying to serve up enough throughput and IOPS for three c6100 nodes' worth of VMs will drive you to more than one disk spindle per VM and a cached RAID card. You'll end up with plenty of disk space but not enough drive slots to reach the IOPS you need.
Here is an alternative for the c6100: simple direct-attached SSD non-RAID storage, with backup and replication as the disaster recovery strategy. You jettison the complexity of a cluster setup, save money on RAID, and sidestep the storage server bottlenecks that are impossible to eliminate on a tight budget. You'll probably end up with better performance than if you built a low-end shared storage solution.
Start with a single c6100 node. Configure it with dual L5520 CPUs and enough RAM to run your VMs. Add a single large SSD drive for VM storage and configure your backup software to take a daily snapshot-based backup of the entire server and its VMs. With the SSD, you'll have plenty of throughput and IOPS to run four to eight average VMs with good performance. If you need more space or have more VMs, add a second SSD drive and store half of the VMs on that drive.
Now spin up a second c6100 node as your replication destination. Use Hyper-V 2012 replication, VMware replication, or Veeam to replicate your VMs to the second node for disaster recovery. The second node just needs large, slow SATA storage drives, not SSDs.
If your VM count grows, you can configure the remaining two c6100 nodes and have them replicate to the replication node as well. Fully configured, your c6100 could easily handle 30 VMs or more - mine does.
With two SSD drives in a single c6100 node, you'll have around 1GB/s of throughput and more than 80,000 IOPS, enough for at least ten VMs. With three c6100 nodes running, that's an aggregate 3GB/s and 240,000 IOPS across, say, 30 VMs. It's possible to get that level of performance from a c6100 storage node, but it would not be cheap, it would add complexity, and that storage node would still be a single point of failure.
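The arithmetic behind those numbers works out like this. The per-drive figures are my assumptions for typical SATA SSDs of this era, so treat them as ballpark:

```python
# Aggregate local-SSD budget across three VM nodes, divided per VM.
# Assumption: ~500 MB/s and ~40,000 IOPS per SATA SSD (ballpark figures).
ssd_mbs, ssd_iops = 500, 40_000
ssds_per_node, nodes, vms = 2, 3, 30

node_mbs = ssd_mbs * ssds_per_node      # ~1,000 MB/s per node
node_iops = ssd_iops * ssds_per_node    # ~80,000 IOPS per node
total_mbs = node_mbs * nodes            # ~3,000 MB/s aggregate
total_iops = node_iops * nodes          # ~240,000 IOPS aggregate

print(f"per VM: ~{total_mbs // vms} MB/s, ~{total_iops // vms} IOPS")
```

Roughly 100 MB/s and 8,000 IOPS per VM - numbers a shared Gigabit-attached storage node on the same budget simply cannot deliver.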
I use Hyper-V 2012 replication for my c6100 "Corporation in a Box" machine. The VMs running on three of the c6100 nodes replicate to the fourth node every five minutes, with an application-aware snapshot every four hours. I retain about five hours of the five-minute replicas and 24 hours' worth of the four-hour snapshots, plus a separate daily backup. This lets me recover from a failure or human error by restoring a daily backup from as far back as 60 days, any of today's four-hour snapshots, or any of the five-minute replicas from the prior four hours.

At one time I replicated over InfiniBand, but it was overkill, so I removed the cards and now just use LACP to bond the two Gigabit connections. Right now it's a plain old c6100 with two 512GB SSD drives and a 2TB backup drive in each of the three VM nodes, and 2TB drives in the replica node. There is no RAID, and I do not back up the replica node at all since I retain backups on each of the three VM nodes. It's a very simple architecture and it works very well.
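The retention scheme above boils down to three tiers of restore points. Here's a minimal sketch of how the tiers add up; the tier names and framing are mine, the intervals and windows come from the setup described:

```python
# Restore-point tiers for the replication + snapshot + backup scheme above.
# Each tier: (interval_minutes, retention_window_minutes).
tiers = {
    "5-min replicas":   (5,       5 * 60),         # most recent hours
    "4-hour snapshots": (4 * 60,  24 * 60),        # the past day
    "daily backups":    (24 * 60, 60 * 24 * 60),   # the past 60 days
}

for name, (interval, window) in tiers.items():
    points = window // interval
    print(f"{name}: ~{points} restore points, worst-case data loss {interval} min")
```

The pattern is the usual backup trade-off: fine-grained recovery points close to now, coarser ones the further back you go.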
Hey all, new to the forums. I just finished reading all 26 pages in this thread, and I feel confident that this setup might be just what I am looking for. I work for a private school with a tiny budget, and I'm looking into getting one of these to switch us over to an ESXi solution. I am looking at a few c6100s on eBay and think I have an idea of the configuration we need, but I was hoping to run a few things by you before I pull the trigger, regarding how these nodes will be set up. The way I see it, I have a few options. The only things I know for sure are that I will need 1TB 3.5" drives for storage and 8 quad-core CPUs.
Option 1: Use 1 node for storage, with 9 drives attached to that single node (I see it is possible but not necessarily clean or easy). Then use 1 drive per node for the other 3 nodes, installing ESXi 5.1u1 on those 3 nodes. This will yield 4TB in RAID 10 plus 1 hot spare (correct me if I'm wrong), and 3 ESX hosts.
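For what it's worth, the 4TB figure checks out. Here's a quick sketch of the RAID 10 math (the helper function and its names are just for illustration):

```python
# Usable capacity of a RAID 10 array with an optional hot spare.
# RAID 10 stripes across mirrored pairs, so usable space = pairs * drive size.
def raid10_usable_tb(drives: int, size_tb: float, hot_spares: int = 0) -> float:
    data_drives = drives - hot_spares
    pairs = data_drives // 2   # an odd leftover drive can't be mirrored
    return pairs * size_tb

# Option 1: 9 x 1TB drives on the storage node, 1 held back as a hot spare.
print(raid10_usable_tb(9, 1.0, hot_spares=1))  # -> 4.0 (TB usable)
```

The same function covers the 5-drives-per-node case: 5 drives with 1 hot spare gives 2TB usable per node.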
-Questions- Can I install FreeNAS on the storage node, and point ESX to it for storage? Or is there a better option?
Option 2: Use 2 nodes for storage, with 5 drives per node (this can be done without any tricky wiring, from what I've read). Then use the other 2 nodes as ESX hosts. This will yield 2x2TB = 4TB of storage in RAID 10, plus a hot spare per node.
-Questions- The same regarding FreeNAS, or is there a better way to use ALL of the storage for ESX?
One way I can save some money is by only getting 1TB drives for the ones that will be used for storage, so for option 1 it would be nine 1TB drives and 3 small drives for the ESX installations.
We are currently running only 3 servers, but the need has arisen for a few more, and the ones we have are starting to show their age. I would like to balance the workload by splitting roles across multiple server VMs so that we don't rely so heavily on any single server. Right now, if one of our major servers goes down, EVERYTHING goes down (file shares, printing, DHCP, DNS, etc.).
Any advice would be great. Our budget is only about $1,000, but I might be able to push it to $1,500. Thanks in advance, and sorry for the novel.