Is the C6100 still worth it?


moto211

Member
Aug 20, 2014
A bit of background before my main question:

My home lab currently consists of a Dell PowerEdge 2950 Gen III with dual L5420s and 16GB RAM. It has an 8x2.5" backplane connected to a PERC 6/i, occupied by 4x73GB 10k RPM SAS drives in RAID5 (where the ESXi machine files and OS system drives are stored) and 4x1TB WD Red SATA drives in RAID10 (occupied entirely by one VMDK attached to my media server). I like to play with a lot of things and am pushing the limits of what this server can handle. Processing power has at times been constrained, but that's not usually the problem.

The main problem is that it only accommodates 8 DIMMs (currently 8x2GB), and while it's only about $50 to buy 8x4GB on eBay, I don't think it's wise to put any more money into this considering that I'll run up against the RAM wall again in about 6 months. The other problem is that the largest affordable 2.5" drives available are 1TB, so getting any more storage into this thing would require replacing the 4x73GB array with 4 more 1TB drives at a cost of ~$300. I've come to the conclusion that putting more money into this won't delay the inevitable long enough to justify the cost. In addition, I've recently added a Precision T3400 running Server 2012 R2 to my network to accommodate 4x3.5" 1TB drives in RAID10 as an iSCSI target (ESXi as the initiator, passing the array to my media server). So, not enough drive capacity is the huge limitation here. To top it all off, the combined idle power draw of my equipment is somewhere around 350W, and I don't feel like I'm getting nearly enough horsepower for it.

So, I apologize for the long-winded life story, but here's the question:

Is the C6100 at current prices still worth it? And considering how much cheaper it is, is the C6105 a better value? Here are the systems I'm contemplating:
Dell C6100 C6105 Cloud Server 6X 1 8GHz AMD 6 Core Hex Core 96GB RAM 3X 250GB | eBay
I realize that this one is limited to 3 nodes and doesn't have the benefit of Intel Hyper-Threading, so 36 threads in total. But damn, it's cheap! If I had a server that I needed 96GB for, I would buy this, scavenge the RAM, and put it right back on eBay.
OR
Dell C6100 XS23 TY3 24 Bay 4 Node Server 8x E5520 QC 48GB RAM LSI MEZZANINE RAID | eBay
This one is obviously everyone's favorite: 32 cores, 64 threads, and a lot of support resources here. But it's got half the RAM and is twice the price. I can get two C6105s for the price of this one. I realize that cost per GB is greater for 2.5" drives, but I can cram a bunch of them in it.

And just for the heck of it, has anybody given one of these a try?:
HP Proliant S6500 4U 4X SL170S G6 8x Quad Core E5504 48GB Better Than C6100 | eBay
 

HellDiverUK

Active Member
Jul 16, 2014
I can't help but think that a modern tower server with some big drives in it would be far more efficient. Buying old stuff off eBay means all you end up with is old stuff.

Buy cheap, buy twice.
 

mattlach

Active Member
Aug 1, 2014
I can't help but think that a modern tower server with some big drives in it would be far more efficient. Buying old stuff off eBay means all you end up with is old stuff.

Buy cheap, buy twice.
I disagree. Nothing wrong with older stuff.

I mean, CPU-wise, advances have been very slow over the last 5-10 years, so older CPUs are just fine. Drives are replaceable, and there's nothing wrong with older motherboards.

I'd much rather have a used server off eBay than build something with consumer, or consumer-like, hardware. (Just ditch the old hard drives, as they are likely junk at this point.)

I just picked up an HP DL180 G6 with dual L5640s on eBay for a song. I stuffed 96GB in it, and it is rapidly in the process of becoming my new AIW VMware box. Very happy with the decision, even though it took some modifications to the fans to make the noise levels acceptable.

The backplane/SAS expander with 12 3.5" drive bays is fantastic to have. I plan on using all of them.

I mean, you could build your own server with a Norco case, Supermicro board, and CPUs, but honestly, you'd pay almost as much for the Norco case alone as I paid for the entire server.
 

moto211

Member
Aug 20, 2014
Thank you for the input, HellDiverUK. Mattlach, you seem to get where I'm coming from: lots of power/capability in a small package.

I guess I should have been a little clearer in that efficiency is only a minor concern. While I don't want the power bill that comes along with pulling a kilowatt from the wall at all times, I am OK with the 300-400 watts at idle or minimum load that I'm currently pulling. I just don't feel like I'm getting enough power or capability from the power that I'm using.

I really like the idea of multiple nodes in 2U of space. I have 4U total available in my 4U XRackPro2 (sound-deadened mini cabinet). I can put in either one of those machines plus one of these:
Dell UltraDenseStorage J23 SAS & SATA CLOUD STORAGE ARRAY JBOD CHASSIS
which would alleviate any concerns about drive capacity. One node will be a dedicated FreeNAS machine with an HBA connected to the storage array, hosting iSCSI target(s) for use by my other servers. Of the other 2 or 3 nodes (depending on which model I choose), only one will be powered continuously, running ESXi and hosting my domain's DC and my media server. Having the additional node (or two) will give me the headroom to add any additional servers or services that I decide I want to run in my lab or add to my home network. It will also allow me to play with HA, vMotion, clustering, failover, and other scenarios that, while not impossible to test on a single piece of hardware, are much more difficult and restrictive to implement.

So really, it's going to be either a C6105 or a C6100. I'm just having a hard time deciding whether the price break is worth the reduction in available horsepower and nodes.
 

mattlach

Active Member
Aug 1, 2014
I guess I should have been a little clearer in that efficiency is only a minor concern. While I don't want the power bill that comes along with pulling a kilowatt from the wall at all times, I am OK with the 300-400 watts at idle or minimum load that I'm currently pulling. I just don't feel like I'm getting enough power or capability from the power that I'm using.

Older systems don't necessarily have to be THAT inefficient either.

I just received my first Kill-A-Watt from Amazon the other day. My DL180 G6 with dual L5640s (12 cores, 24 threads total) and 96GB of 1.5V RAM installed pulls about 80W at idle, which is pretty much equivalent to my old server built around a consumer AMD FX-8350 chip.

I really like the idea of multiple nodes in 2U of space.

This is the part that I don't understand. I only discovered multiple node systems (like the C6100) recently. What is the benefit of this kind of system? The appeal to me is instead to have a single box with gobs of RAM and cores in it, and run EVERYTHING on it using ESXi.

I'd be curious what about multiple nodes it is that draws people in.

Processing power has at times been constrained, but that's not usually the problem.

Just be aware that AMD CPUs do less per GHz than Intel CPUs, and the AMD CPUs in that C6105 listing are clocked pretty low at 1.8GHz.

The listing is vague, but I'd imagine those are Lisbon core Opterons, so pre-Bulldozer architecture.

If I had to guesstimate, I'd expect each of those 1.8GHz Opteron cores to provide only ~65% of the performance of each 2.5GHz L5420 core, approximately 70% of each 2.26GHz E5520 core, and about 85% of each 1.86GHz E5502 core at base clock. Also consider that the E5520 will "turbo" up a single core significantly if the others are mostly idle, which increases performance; the Lisbon-era Opterons do not do this.

So, if you are "at times" CPU constrained, just be careful how much you downgrade the CPU power.
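
If it helps, here's a rough back-of-the-envelope sketch of aggregate CPU throughput using those guessed per-core factors (the 65%/70% scalings above are estimates, not benchmarks, and turbo/hyperthreading are ignored):

```python
# Rough aggregate-throughput comparison using the per-core guesses above.
# Baseline: one 2.5GHz L5420 core = 1.0. These factors are estimates, not benchmarks.

L5420_CORE = 1.00
OPTERON_18_CORE = 0.65               # ~65% of an L5420 core (guess above)
E5520_CORE = OPTERON_18_CORE / 0.70  # Opteron ~70% of an E5520 core -> ~0.93

systems = {
    "PE2950 (2x L5420, 8 cores)": 8 * L5420_CORE,
    "C6105 (6x 1.8GHz Opteron, 36 cores)": 36 * OPTERON_18_CORE,
    "C6100 (8x E5520, 32 cores)": 32 * E5520_CORE,
}

base = systems["PE2950 (2x L5420, 8 cores)"]
for name, score in systems.items():
    # Turbo and hyperthreading are ignored; both would favor the Intel boxes further.
    print(f"{name}: {score:.1f} (~{score / base:.1f}x the PE2950)")
```

Under those assumptions, either option lands somewhere around 3-4x the PE2950 in aggregate; the C6105's weak point is per-core speed for single-threaded work.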
 

Marsh

Moderator
May 12, 2013
This is the part that I don't understand. I only discovered multiple node systems (like the C6100) recently. What is the benefit of this kind of system? The appeal to me is instead to have a single box with gobs of RAM and cores in it, and run EVERYTHING on it using ESXi.
The Dell C6100 was very popular last year; one main reason for me was the price.
For a few months last year, you could buy a Dell C6100 with 8 x L5520 and 96GB memory, the entire chassis, for approximately $650 to $850, and down to $600 with eBay Bucks and eBay gift card incentives.

24 x 4GB memory sticks at today's prices of approx. $20 to $25 per 4GB stick = $480 to $600 total
8 x L5520 CPUs at $25 each = $200 total
Chassis, power supplies, and heatsinks are free.

I purchased 3 x Dell C6100 for the memory sticks, when I only needed one Dell C6100, or half of one.
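
For what it's worth, the arithmetic behind "the chassis is free" works out like this (a sketch only, using the approximate eBay prices quoted above):

```python
# Parting-out math for a complete C6100 at last year's ~$650-$850 prices
# (all figures are the approximate eBay prices quoted above).

dimm_low, dimm_high = 20, 25   # per 4GB DDR3 stick
dimm_count = 24                # 24 x 4GB = 96GB
cpu_price, cpu_count = 25, 8   # 8 x L5520

parts_low = dimm_count * dimm_low + cpu_count * cpu_price    # 680
parts_high = dimm_count * dimm_high + cpu_count * cpu_price  # 800

print(f"Memory + CPUs alone: ${parts_low} to ${parts_high}")
# The complete system sold for roughly the same $650-$850, so the chassis,
# power supplies, and heatsinks effectively came along for free.
```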
 

MatrixMJK

Member
Aug 21, 2014
The driving force toward multi-node systems for me is using the machine as a home lab for working with multiple storage and virtualization scenarios. Being able to set up an ESX cluster and do real HW failover has helped a ton with implementing those features and others at work.
 

moto211

Member
Aug 20, 2014
This is the part that I don't understand. I only discovered multiple node systems (like the C6100) recently. What is the benefit of this kind of system? The appeal to me is instead to have a single box with gobs of RAM and cores in it, and run EVERYTHING on it using ESXi.

I'd be curious what about multiple nodes it is that draws people in.
The appeal is being able to easily play with higher-level hypervisor features like HA, vMotion, clustering, failover, and multi-node cloud OSes that can leverage the combined resources of all included nodes (like OpenStack). I realize that many of those scenarios can be tested in the lab using VMs, but it's more restrictive since you're dealing with nested hypervisors in that scenario.

On further investigation, I have discovered that the C6105 I linked to is most likely a C6005, which is to be avoided. They're cold-swap only, and I'd most likely be stuck on whatever BIOS version they come with. So the C6100 is the only option if I'm going to get a C61xx series and keep it affordable (so no C62xx series).

So now I have a new dilemma: do I get the C6100 that I originally linked, plus the 23-drive JBOD chassis that I also linked and an HBA for my FreeNAS node?

Or, do I get 3 or 4 of these?:
Dell PowerEdge R610 2X 2 26GHz L5520 12GB RAM 2X 73GB SAS 6IR DRAC | eBay
Two of them might do it, but then the multi-node stuff I want to play with would have to be virtualized, since one of them will be dedicated to FreeNAS. That would leave me only one unit for my virtual DC, media server, and general labbing.
 

mattlach

Active Member
Aug 1, 2014
The driving force toward multi-node systems for me is using the machine as a home lab for working with multiple storage and virtualization scenarios. Being able to set up an ESX cluster and do real HW failover has helped a ton with implementing those features and others at work.
The appeal is being able to easily play with higher-level hypervisor features like HA, vMotion, clustering, failover, and multi-node cloud OSes that can leverage the combined resources of all included nodes (like OpenStack). I realize that many of those scenarios can be tested in the lab using VMs, but it's more restrictive since you're dealing with nested hypervisors in that scenario.
Ahh, OK that makes sense.

It seems my motivation for using ESXi is different from that of most people on these forums.

Mine isn't a "lab" per se, but rather a "home production box" which hosts my NAS, my router/firewall, my Ubuntu server, and my MythTV DVR server among other things.

The appeal of ESXi is not necessarily to learn anything for work (I don't now, never have, and never plan to work in IT), though learning a little about virtualization has been fun. My primary motivation was consolidation. Now I can do everything I would otherwise need 5 servers for (or at least 2 or 3, if I were willing to combine stuff in Linux with tricky and sensitive dependencies) all in one.

So I run a standalone ESXi box with no vCenter. I haven't touched HA, vMotion, clustering, failover, or multi-node setups, and I have no interest in doing so unless I absolutely have to in order to get my "home production box" to work. The most advanced thing I've done is DirectPath I/O passthrough (the simple type, not SR-IOV), because I needed to pass through a storage controller for FreeNAS.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
My take on this:

Consolidation makes sense. If you do go C6100, get L-series chips and do not spend more than $15/chip for the L55xx generation. The L56xx generation has AES-NI, a 32nm process, and potentially more cores.

Then again, you could probably get a C6100 + 2x L5520 in each sled w/ 48GB dirt cheap compared to other systems.

The dual-socket systems aren't moving to 14nm for some time, so with the L5600s you are only one process node behind. It doesn't sound like you need fancy acceleration like AVX, since you're not doing stuff like astronomy calculations.
 

moto211

Member
Aug 20, 2014
MiniKnight,

Are you suggesting that it might be better to get a barebones C6100 and purchase the CPUs and RAM separately?
 

Marsh

Moderator
May 12, 2013
I'm wondering, did you have a chance to read this main site article?
http://www.servethehome.com/xeon-en-dead-long-live-en/

I have picked up some incredible LGA1356 bargains on eBay in the previous 12 months, like E5-2430 C1-stepping CPUs for $25 each.
There are still cheap LGA1356 components on eBay today.

The LGA1356 platform has the features and power savings for home use. My dual E5-2430 rig with 1 hard disk and 3 SSDs idles at 60W.

I just picked up a $100 E5-2403 from eBay:
Intel Xeon Processor CPU E5 2403 | eBay
and a new Intel S2400SC2 server motherboard for $175:
New Intel S2400SC2 Server Motherboard Intel C600 A Chipset Socket B2 LGA 135 0675901150231 | eBay

This setup makes a nice ZFS server.
 

moto211

Member
Aug 20, 2014
OK, so the J23 JBOD chassis may be a no-go also. It looks like it was a custom DCS job, and trays are nowhere to be found unless you buy a unit that already has the trays and pay way too much for it. It's so cheap, though. Might it be worthwhile to buy it and harvest the guts out of it to build my own SAS expansion chassis?

Also, since I no longer need to keep 2U reserved for the JBOD chassis, I'm leaning more heavily toward 4x R610s. The cost will be about the same as the C6100 that I linked, and the combined specs will be pretty much the same too: 8x L5520, 48GB RAM, and 24x 2.5" bays, and going this route will even net me 8x 73GB SAS drives. Not to mention I'll have much better manageability and fewer single points of failure. Power usage will be a bit higher but still within my acceptable envelope at a combined ~280 watts idle and ~600 watts full load. Only 2 of the units will run 24/7, so 140 and 300 watts respectively.
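
As a rough sanity check on what that always-on draw costs over a year (a sketch; the electricity rate is my assumption, not a figure from this thread):

```python
# Annual running cost of the two always-on R610s at ~140W combined idle.
# The electricity rate is an assumed example, not a figure from the thread.

idle_watts = 140
hours_per_year = 24 * 365
rate_per_kwh = 0.12  # assumed $/kWh

kwh_per_year = idle_watts * hours_per_year / 1000  # ~1226 kWh
print(f"~{kwh_per_year:.0f} kWh/yr, roughly ${kwh_per_year * rate_per_kwh:.0f}/yr at ${rate_per_kwh}/kWh")
```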
 

Patriot

Moderator
Apr 18, 2011
Just FYI to those who consider the HP Proliant S6500, better stay away.
:facepalm:
The C6100 is a 2U; the SL6500 is a 4U. Other than the confusion in that listing, I don't see any reason not to use an SL6500... I have a pair of them.
That said, I am running all Gen8 servers in them.
 

moto211

Member
Aug 20, 2014
:facepalm:
The C6100 is a 2U; the SL6500 is a 4U. Other than the confusion in that listing, I don't see any reason not to use an SL6500... I have a pair of them.
That said, I am running all Gen8 servers in them.
Agreed. The S6500 that I linked isn't a bad setup. The eBay listing is a bit misleading, though: he shows 8 nodes but is actually selling a unit with 4 nodes. That said, it's not much more money than a similarly equipped C6100 but has room for expansion to twice as many nodes. If housing a bunch of hot-swappable drives is a priority, you might be better off with 2x C6100s, though (hence why I'm not looking at this as a viable option). Also, HP's recently adopted policy of only allowing firmware and driver downloads to customers with active support agreements makes their hardware less desirable on the secondhand market.
 

Patriot

Moderator
Apr 18, 2011
Moto... that is the same policy as Dell and IBM and Cisco...
And it isn't a recent policy... They just make you log in and prove it now.
Yeah, it kinda sucks... but it really isn't a reason to go with lesser hardware.
If you just have to have an update, warranty one system and use the ROMs for all of them.

I think you confused the seller... as he sells both 4- and 8-node setups.
 

mattlach

Active Member
Aug 1, 2014
Moto... that is the same policy as Dell and IBM and Cisco...
And it isn't a recent policy... They just make you log in and prove it now.

I think you confused the seller... as he sells both 4- and 8-node setups.
Yeah, I can vouch for esiso.

I've done business with them in the past, and they make things right if there are problems. (They do seem to get a little mixed up on occasion, but that's probably not surprising considering the amount of stuff of varying vintages they go through.)
 

moto211

Member
Aug 20, 2014
Moto... that is the same policy as Dell and IBM and Cisco...
And it isn't a recent policy... They just make you log in and prove it now.

I think you confused the seller... as he sells both 4- and 8-node setups.
If that's Dell's policy too, then they don't enforce it at all. I can't tell you how many Dell servers I've had or worked on that are well outside their support agreement, and I've never had an issue getting drivers and firmware. Now, Dell does have a habit of just not publishing any more updates once the majority of users of that platform are likely out of support. But they don't pull access to what's already been published.

I'm not saying he's intentionally trying to pull one over. Quite the opposite: the title does clearly say 4x SL170s. It's just that he may have gotten mixed up when he made the listing that I posted. The details say eight nodes, but they also say 8 CPUs when they're all supposed to be dual-CPU nodes, which should be 16 CPUs. That is, unless they're all dual-CPU nodes with only one socket populated (I'm not even sure that's supported with those nodes?). Judging by the Q&A from another one of his listings, I'd assume my suspicions are correct. True 8-node units go for more, too. It's not anything to worry about; the price is still right for a 4-node unit. If I were the one buying it, I'd just want to verify that I'm getting what I'm expecting before purchasing.
 

NetWise

Active Member
Jun 29, 2012
Edmonton, AB, Canada
I might be rehashing some of the previous comments but I thought I'd add my $0.0192 (I'm Canadian, the money ain't worth as much...).

The C6100 is still a great deal for what it is. The C6105s are nice, but the OP was on a PE2950 and didn't want to invest more in dead-end memory. The C6105s are going to be running DDR2, which, while it can be found relatively cheap, only gets you up to about 32GB per node cost-effectively. If you can live with 96GB across 3 hosts (and many could; that's a decent lab), then that's great. But the expansion is limited.

The best thing about the C6100 is the ability to take the low-power L56xx 6-core chips and DDR3 RAM. A lot of companies I'm dealing with are yanking 4/8GB DIMMs, especially out of blades that are horribly slot-constrained, and going to 16 or 32GB DIMMs where possible (they really should be buying new machines, it'd be more cost-effective, but whatever). This means there is (IMHO) a glut of 4-8GB DDR3 DIMMs out there to be had. I've managed to get mine up to 384GB (96GB/node), and I can't possibly think of what I'll use it all for other than remote labs for local friends who want to see some VMware goodness.

The S6500 looks neat, but I'm anti-HP/IBM for the reasons mentioned earlier: FOD keys, hardware locks, and software restricted to owners under maintenance (even those who were under maintenance when said update was released can't get it, so hopefully someone updated that hardware before they decommissioned it). Dell is much better about this.

One issue that any of these setups has (C6105/C6100/S6500) is that they're all one chassis. Long term, it would be better to have 2x C6100 for 8 nodes vs. 1x S6500 with 8 nodes, in my opinion, especially for a home lab with no 4-hour response. There's a good chance you, like me, would replace any failed parts via eBay, and that would be a long time to have all nodes down.

The C6100 has 2-port 10GbE NICs available, often for $120 or so. You'll probably need to Dremel some holes for them in the chassis, but they work just fine.

Someone had asked if the Dell R610s for ~$250 would be better, and that's subjective. It's going to end up being 8x the power supplies and cords, but there's no single failure domain. You should have 4x 1GbE LOM vs. 2x, and at least 2 *standard* PCIe slots. This means you could easily go to 12x 1GbE if you wanted, with no issues, and the internal PERC 6/i or PERC H700/i won't take up a rear-facing slot. iDRAC Enterprise (virtual media, IP KVM, and vFlash) sells for $20 on eBay. A consulting company I do work for recently said they had $8K to spend on "a server," and I suggested they look at 4x R610/2x 6-core/96GB boxes to replace 4x 2950; they could easily just keep the 4th as a cold/hot spare, even if the old hardware does fail. They'll still be significantly under budget. The other benefit of the R610s is that it's not an all-or-nothing deal. Need to sell one to get some cash for another project? You can. Want to move the 4th node somewhere to be a DR box? No problem. You can't really do that with the C6100.

There was a question about power as well, and last I checked, I was pulling about 330W on my unit. Remember that if you're doing something with VMware, you can always use DRS and DPM to power down unneeded nodes (I do) to save on power.

With the announcement of VMware's EVO:RAIL, though, you can see that even the big OEMs like the concept. ;)