I might be rehashing some of the previous comments but I thought I'd add my $0.0192 (I'm Canadian, the money ain't worth as much...).
The C6100 is still a great deal for what it is. The C6105's are nice, but the OP was on a PE2950 and didn't want to invest more in dead-end memory. The C6105's are going to be running DDR2, which, while it can be found relatively cheap, only gets you up to about 32GB per node cost effectively. If you can live with 96GB across 3 hosts (and many could, that's a decent lab), then that's great. But the expansion is limited.
Best thing about the C6100's is the ability to take the low-power L56xx 6-cores, and the DDR3 RAM. A lot of companies I'm dealing with are yanking 4/8GB DIMM's, especially out of blades that are horribly slot constrained, and going to 16 or 32GB DIMM's where possible (they really should be buying new machines, it'd be more cost effective, but whatever). This means there is (IMHO) a glut of 4-8GB DDR3 DIMM's out there to be had. I've managed to get mine up to 384GB (96GB/node) and I can't possibly think of what I'll use it all for, other than remote labs for local friends who want to see some VMware goodness.
The S6500 looks neat, but I'm anti-HP/IBM for the reasons mentioned earlier: FOD keys, hardware locks, software updates restricted to owners under maintenance (even someone who was under maintenance when a given update was released can't get it later, so hopefully somebody updated that hardware before it was decommissioned). Dell is much better about this.
One issue that any of these setups has (C6105/C6100/S6500) is that they're all one chassis. Long term, it would be better to have 2x C6100 for 8 nodes vs 1x S6500 with 8 nodes, in my opinion. Especially for a home lab with no 4-hour response contract. There's a good chance you, like me, would replace any failed parts via eBay, and that would be a long time to have all nodes down.
The C6100 has the 2pt 10GbE NIC's available, often for $120 or so. You'll probably need to dremel in some holes for them in the chassis, but they work just fine.
Someone had asked if the Dell R610's for ~$250 would be better - and that's subjective. It's going to end up being 8x the power supplies and cords, but no single failure domain. You should have 4x 1GbE LOM vs 2x, and at least 2 *standard* PCIe slots. This means you could easily go to 12x 1GbE if you wanted, with no issues, and the internal PERC 6/i or PERC H700/i won't take up a rear-facing slot. iDRAC Enterprise (virtual media, IP KVM, and vFlash) sells for $20 on eBay. A consulting company I do work for recently mentioned they had $8K to spend on "a server," and I suggested they look at 4x R610/2x6c/96GB boxes to replace 4x 2950; they could easily just keep the 4th as a cold/hot spare in case the old hardware does fail, and they'll still be significantly under budget. The other benefit of the R610's is it's not an all-or-nothing deal. Need to sell one to get some cash for another project? You can. Want to move the 4th node somewhere to be a DR box? No problem. Can't really do that with the C6100.
There was a question about power as well, and last I checked, I was pulling about 330W on my unit. Remember that if you're doing something with VMware, you can always use DRS and DPM to power down unneeded nodes (I do) to save on power.
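If you want to put a dollar figure on that 330W draw, the math is simple enough to sketch. This is just back-of-the-envelope arithmetic; the $0.12/kWh rate and the ~60W-per-sleeping-node figure are assumptions, not measurements, so plug in your own numbers.

```python
# Rough annual power-cost arithmetic for a constant draw.
# Rate and per-node savings below are assumed values, not measured.

def annual_power_cost(watts, rate_per_kwh, hours_per_year=24 * 365):
    """Yearly electricity cost in dollars for a constant draw in watts."""
    kwh = watts / 1000 * hours_per_year
    return kwh * rate_per_kwh

# Full chassis at ~330 W, 24/7, at an assumed $0.12/kWh:
chassis = annual_power_cost(330, 0.12)
print(f"330 W around the clock: ${chassis:.2f}/yr")

# If DPM sleeps 2 of 4 nodes for 12 h/day, and each sleeping node
# saves roughly 60 W (a guess), the yearly savings come out to:
saved = annual_power_cost(2 * 60, 0.12, hours_per_year=12 * 365)
print(f"Sleeping 2 nodes 12 h/day: ${saved:.2f}/yr saved")
```

Even with conservative assumptions, letting DPM park a couple of nodes overnight pays for a chunk of the hardware over a year or two.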
With the announcement of VMware's EVO:RAIL though, you can see that even the big OEM's like the concept.