Radian,
So your VMs are using non-SSD storage? How many VMs are you running and how many disks are in that RAID6 array?
RAID6 is nice for data protection of course, but it's terrible for IOPS and I wonder if that isn't holding you back. 10TB of usable RAID6 space is most likely twelve 1TB drives (six 2TB drives in RAID6 would only give you 8TB usable). With 12 drives and RAID6's write penalty of six, you'll have just 200-400 write IOPS at the drive level, and roughly half that with six drives. Your LSI card has some cache, so if you measured IOPS the number would be higher than that, but not dramatically: your data is spread across a far larger area than 512MB of cache can hold. That's not enough for one busy VM, much less several.
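To make the write-penalty math concrete, here's a quick back-of-the-envelope sketch (the per-drive IOPS figure is a typical 7,200rpm assumption, not a measurement of your actual array):

```python
def raid6_write_iops(drives: int, iops_per_drive: int) -> float:
    """Estimate sustained random write IOPS for a RAID6 array.

    RAID6 carries a write penalty of 6: each small random write triggers
    a read-modify-write of the data block plus both parity blocks
    (3 reads + 3 writes = 6 back-end operations per front-end write).
    """
    return drives * iops_per_drive / 6

# Assume ~100 random IOPS per 7,200rpm SATA drive (illustrative).
print(raid6_write_iops(12, 100))  # twelve drives -> 200.0
print(raid6_write_iops(6, 100))   # six drives   -> 100.0
```

Real numbers vary with drive speed and controller caching, but the order of magnitude is the point.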
If you can, give this a try: Temporarily migrate one or two of your VMs to a local SSD drive. If you can do so easily, add a few local SSD drives and then move all of your VMs. I suspect that you'll immediately start thinking "I need better storage" instead of "I need more CPU".
If that ends up being the case, we can talk about long-term storage options for VMs.
My first-hand experience: I have an HP MSA2000 G2 SAN. When I first acquired it, I thought it might make a good storage location for a VM cluster. I knew that RAID6 has a serious IOPS penalty, but I wondered if the 4GB worth of cache on the controllers would more than make up for it.
I created a 10TB RAID6 array out of twelve 1TB disks. Using IOMeter, I measured (recalling from memory) around 9,000 IOPS, which told me the controller cache was working very well. I then connected the array to a host via SAS (about 1,000 MB/s max throughput) and migrated several VMs to it. Performance was disappointing: the VMs felt very sluggish, and when I used IOMeter to measure IOPS inside each VM with all VMs running at the same time, the numbers were awful. In the end, I migrated the VMs to local 512GB Samsung SSDs. I can run eight or ten VMs from one SSD and still get far better real-world performance than the MSA array with its dual controllers, twelve disks, and 4GB of cache. By the way: I also tried RAID10 on the MSA, which was much better for VMs but still nowhere near as good as local SSD.
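The RAID10-versus-RAID6 difference I saw lines up with the standard small-write penalty factors. A rough sketch (the penalties are the textbook values; the per-drive numbers are illustrative, not from my MSA testing):

```python
# Standard small-random-write penalties: RAID10 mirrors each write (2 ops),
# RAID6 does a read-modify-write of data plus two parity blocks (6 ops).
WRITE_PENALTY = {"raid10": 2, "raid6": 6}

def array_write_iops(level: str, drives: int, iops_per_drive: int) -> float:
    """Estimate sustained random write IOPS for a given RAID level."""
    return drives * iops_per_drive / WRITE_PENALTY[level]

hdd_iops = 100      # typical 7,200rpm drive (illustrative)
ssd_iops = 30_000   # a single consumer SATA SSD, order of magnitude

print(array_write_iops("raid6", 12, hdd_iops))   # -> 200.0
print(array_write_iops("raid10", 12, hdd_iops))  # -> 600.0
print(ssd_iops)  # one SSD still dwarfs both arrays for random writes
```

RAID10 roughly triples the RAID6 number, but a single SSD is still one to two orders of magnitude ahead for random writes, which matches what I saw in the VMs.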
Yes to SSD: I have two 240GB Intel 520s in RAID1 for the OS drive, and a 10TB RAID6 setup on an LSI 9260-4i, split between VMs and storage for media. I'm kind of stuck on AMD for the moment as I'm not ready to replace platforms as well as CPUs.
I decided to go with the 6328 as these are less than half the price of the 16-core. I considered the 12-core, but since I have an older KDGPE motherboard I'll use that to house the older 6128s for more VMs in the future. I'm hoping to see some decent net results in the 10 or so VMs I run.
I guess I now need to look at some kind of SAN setup once both servers are running. What's recommended for shared VM storage?