Is the C6100 still worth it?


moto211

Member
Aug 20, 2014
I'd hold out for LGA 2011/1356 gear at this point.
Any idea when the C6220 was made available by leasing providers? I know the C6220 was released in 2012, but when in 2012? That'll give me an idea of how long I'll need to wait before we start seeing them come off lease. If it's going to be more than about 6 months, I may just pick up a C6100 or Supermicro 6026TT-HTRF. Power usage on the Supermicro should be about the same, right?
 

moto211

Member
Aug 20, 2014
I'm now considering the Supermicro 6026TT-TF. It's got the 4 nodes I'm looking for, with 2x L5520 per node and no RAM or drives. All 12 drive sleds are included and I can get it for about $570 shipped. I already have 4x 1TB drives that I'll attach to the first node. Another $70 will put 12GB in the first node, and I'll be able to migrate my current ESXi workload off my old 2950 and onto this. Then, as finances permit, I can populate the other nodes with RAM and drives.

I realize that I'll give up hot-swap nodes, but I can live with that. I'll also have to buy another PSU later down the road for $79 (the seller only equips it with one). All in, though, I can get 4 nodes with 8x L5520, 48GB RAM, and 2 PSUs for $930 (paid over time), while a similarly equipped C6100 will cost $1,000 plus tax and shipping (and all up front). Not to mention I'll be free of the possibility of getting a DCS unit with no BIOS updates.
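If you want to sanity-check that figure, here's a quick back-of-the-envelope sketch; the assumption that each of the other three nodes also takes roughly $70 of RAM is mine, extrapolated from the first node:

```python
# Back-of-the-envelope total for the phased 6026TT-TF build described above.
# Assumption: each of the 4 nodes gets roughly $70 of RAM, like the first one.
chassis = 570          # 4 nodes, 8x L5520, 12 drive sleds, 1 PSU, shipped
ram_per_node = 70      # ~12GB per node (assumed same price for every node)
second_psu = 79        # redundant PSU added later

total = chassis + 4 * ram_per_node + second_psu
print(f"Phased Supermicro build: ${total}")   # -> $929, i.e. roughly the $930 above
```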

What do you guys think of this option?
 

PersonalJ

Member
May 17, 2013
Any idea when the C6220 was made available by leasing providers? I know the C6220 was released in 2012, but when in 2012? That'll give me an idea of how long I'll need to wait before we start seeing them come off lease. If it's going to be more than about 6 months, I may just pick up a C6100 or Supermicro 6026TT-HTRF. Power usage on the Supermicro should be about the same, right?
I don't know when they started being leased in large quantities, but I figure most of the equipment was on a three-year lease agreement.
 

moto211

Member
Aug 20, 2014
Yeah, the C6220s probably aren't far off, but I figure that if I'm always waiting for the next best thing, I'll never get anything cool. That, and I have a $200 offer on my 2950, and I'm keeping my 4x 1TB WD Reds ($320 a few months ago). If I wait until the C6220s become available, my current server may be worth close to nothing.

I'm picking up the 6026TT-TF that I mentioned above tomorrow from Unix Surplus. I'm also waiting on delivery of 24GB of RAM from eBay to populate the first node. All in (factoring in the $200 I'll receive for my 2950), it'll cost me less than $500, and I'll have 3 more nodes I can add RAM to as my needs (and funds) grow.
 

firenity

Member
Jun 29, 2014
I'm picking up the 6026TT-TF that I mentioned above tomorrow from Unix Surplus. I'm also waiting on delivery of 24GB of RAM from eBay to populate the first node. All in (factoring in the $200 I'll receive for my 2950), it'll cost me less than $500, and I'll have 3 more nodes I can add RAM to as my needs (and funds) grow.
I'm interested in the 6026TT series as an alternative to the C6100 too, but as a 2-node version like the 6026TT-HDTRF.
If you've had time to play with yours yet... how do you like it so far?
 

AERuffy

Member
Dec 12, 2013
Just for some insight:
C6220s can be had with E5-2620s, 512GB of RAM, and no drives for under $10k used with a warranty from places like Xbyte or eBay.
 

moto211

Member
Aug 20, 2014
I'm interested in the 6026TT series as an alternative to the C6100 too, but as a 2-node version like the 6026TT-HDTRF.
If you've had time to play with yours yet... how do you like it so far?
I'm loving it. The one limitation that I didn't count on is that each column of three drives in the backplane is powered by one of the nodes via a Molex jumper from that node's mobo to the backplane. Not a problem if you are going to run three drives or fewer per node. I wanted to run six on one node for a FreeNAS ZFS setup, so I had to split the jumper to two of the backplane connectors so that all six drives spin up with the server they're actually attached to. The remaining two columns are each wired to a single node, leaving no drives for the 4th node. Not a problem, since the 4th node is being kept offline as a cold spare. I may eventually look at an Avoton build in a 20-drive Norco case for FreeNAS, return to the stock three drives per node, and light all 4 nodes up.

I originally thought that it was loud, but I figured out that the fans were running full tilt because they were plugged into the backplane without the special jumper that connects to the motherboards for speed control. When I connected the fans directly to the mobos, it got much quieter, since the speed is now adjusted based on load. I have each set of 2 fans connected to the corresponding top node on each side. Since I only have the two top nodes active at all times (the 3rd one is for playing and the 4th one is a cold spare), I didn't want to individually wire each fan to a different node, as that would mean only two fans would be running most of the time.

Oh, if eBay prices are any indication (and in this case they absolutely are), then the 6026TT-HDTRF isn't worth your time. It sells for way more but has two fewer nodes. It's probably because they sold far fewer of them, so they're more scarce secondhand. The 6026TT-TF is more plentiful, so they sell for less. Just get one and run only 2 nodes. Then you'll still have the ability to add a redundant power supply and have 2 additional nodes to play with if you want to. The only way I would consider the 6026TT-HDTRF is if 1U of space is all I could spare.
 

firenity

Member
Jun 29, 2014
Thanks for the info!

One of the reasons why I'm looking for "only" 2 nodes (in 2U, not 1U as you mentioned) is that it allows for more efficient cooling, which could be helpful if I'm going to replace the stock fans with some quieter/less powerful ones. Plus, I only really need 2 nodes.

I'd like to have one node use 4 disks and the other use 8 disks. I'd be fine with running the 4-disk node off the onboard SATA controller (RAID10, probably), but I'd want the 8 disks to be RAID6, which would require an additional HBA. Do you think that setup would be possible (cabling from the backplane to the HBA, etc.)?
I realize something like that is not officially supported by Supermicro and would prevent me from hot-swapping the 8-disk node, but I could live with that.

About eBay prices... The thing is, I'm from Germany, and finding good deals on these types of servers (or finding them at all!) is much, much harder than it is in the US.
The 6026TT-HDTRF just happened to be the only Supermicro 6026TT server available at a reasonable price at the moment. Here's a link to the one I'm considering. It might not be a great deal by US standards, but it certainly seems to be one over here. I've already found a place where I can get the drive caddies and rail kit for a reasonable price.

What do you think?
 

moto211

Member
Aug 20, 2014
My mistake. All 6026 models are 2U.

My guess is that each mobo powers two columns of three drives. So, you could route data connections 8/4 but power would have to be 6/6 or 9/3.
 

firenity

Member
Jun 29, 2014
My guess is that each mobo powers two columns of three drives. So, you could route data connections 8/4 but power would have to be 6/6 or 9/3.
I see. This wouldn't really be a problem if both nodes run 24/7.

Let's say I'm going for 8/4 disks and 9/3 power. If I power off the 8-disk node, the 4-disk node will lose power to one of its disks, which would obviously be problematic.
Is it possible to power disks independently of motherboards?

I also wouldn't mind a 9/3 disk config, but that would mean a more expensive RAID controller (3 or 4 x SFF-8087 instead of 2 x SFF-8087).
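Just to make the orphaned-disk problem above concrete, here's a toy sketch of the 8/4 data split against a 9/3 power split; the slot numbering is made up purely for illustration:

```python
# Toy model: 12 drive slots, data wired 8/4 but power wired 9/3.
# Slot numbering is arbitrary, just for illustration.
power_node = {s: "A" for s in range(9)} | {s: "B" for s in range(9, 12)}   # 9/3 power
data_node  = {s: "A" for s in range(8)} | {s: "B" for s in range(8, 12)}   # 8/4 data

off = "A"  # power down the 8-disk node
orphaned = [s for s in data_node
            if data_node[s] != off and power_node[s] == off]
print(orphaned)   # -> [8]: one of node B's four disks loses power with node A off
```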
 

moto211

Member
Aug 20, 2014
I'd recommend that you download the documentation for that model. Since that is a hot-swap model, its drives may be powered differently than on my non-hot-swap model.
 

firenity

Member
Jun 29, 2014
I found a pretty good picture of the backplane (BPN-SAS-827HD) on eBay:



Looks like the 2 nodes plug into those vertical black connectors on each side.
So you're pretty much forced to use 6 disks/node, I guess :/.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Is that a standard 24-pin ATX power connector and 2x 8-pin CPU power connectors? How cool is that?
 

firenity

Member
Jun 29, 2014
Maybe I could use the backplane from the 6026TT-TF (moto211's model) in the 6026TT-HDTRF, since both use an 827 chassis? As far as I can tell, the main difference is that one is for hot-swappable nodes and the other one isn't.
The non-hot-swap backplane (BPN-SAS-827T) uses 12x standard 7-pin SAS/SATA data connectors (sorry about the watermark):



I would obviously lose hot-swap capabilities and it would take some adjustments on the nodes themselves, like removing the "node backplane adapter" (or whatever you call it) that would normally plug into the disk backplane connectors mentioned in my previous post:



Of course, it's highly doubtful that this would work; the nodes might not even boot with the adapter removed, the backplane might not fit, etc.

What do you guys think?
 

Patrick

Administrator
Staff member
Dec 21, 2010
I would think it has a low probability of success.
 

yu130960

Member
Sep 4, 2013
Canada
I might be rehashing some of the previous comments but I thought I'd add my $0.0192 (I'm Canadian, the money ain't worth as much...).

The C6100 is still a great deal for what it is. The C6105s are nice, but the OP was on a PE2950 and didn't want to invest more in dead-end memory. The C6105s are going to be running DDR2, which, while it can be found relatively cheap, only gets you up to about 32GB per node cost-effectively. If you can live with 96GB across 3 hosts (and many could; that's a decent lab), then that's great. But the expansion is limited.

The best thing about the C6100s is the ability to take the low-power L56xx 6-cores and DDR3 RAM. A lot of companies I'm dealing with are yanking 4/8GB DIMMs, especially out of blades that are horribly slot-constrained, and going to 16 or 32GB where possible. (They really should be buying new machines; it'd be more cost-effective, but whatever.) This means there is (IMHO) a glut of 4-8GB DDR3 DIMMs out there to be had. I've managed to get mine up to 384GB (96GB/node) and I can't possibly think of what I'll use it all for, other than remote labs for local friends who want to see some VMware goodness.

The S6500 looks neat, but I'm anti-HP/IBM for the reasons mentioned earlier: FOD keys, hardware locks, software restricted to owners under maintenance (even those who were under maintenance when an update was released can't get it later, so hopefully someone updated the hardware before it was decommissioned). Dell is much better about this.

One issue all of these setups share (C6105/C6100/S6500) is that they're a single chassis. Long term, it would be better to have 2x C6100 for 8 nodes vs. 1x S6500 with 8 nodes, in my opinion, especially for a home lab with no 4-hour response. There's a good chance you, like me, would replace any failed parts via eBay, and that would be a long time to have all nodes down.

The C6100 has 2-port 10GbE NICs available, often for $120 or so. You'll probably need to Dremel some holes for them in the chassis, but they work just fine.

Someone had asked if Dell R610s for ~$250 would be better, and that's subjective. It's going to end up being 8x the power supplies and cords, but there's no single failure domain. You should have 4x 1GbE LOM vs. 2x, and at least 2 *standard* PCIe slots. This means you could easily go to 12x 1GbE if you wanted, with no issues, and the internal PERC 6/i or PERC H700 won't take up a rear-facing slot. iDRAC Enterprise (virtual media, IP KVM, and vFlash) sells for $20 on eBay. A consulting company I do work for recently said they had $8K to spend on "a server," and I suggested they look at 4x R610/2x 6-core/96GB boxes to replace 4x 2950; they could easily keep the 4th as a cold/hot spare, even if the old hardware does fail, and they'll still be significantly under budget. The other benefit of the R610s is that it's not an all-or-nothing deal. Need to sell one to get some cash for another project? You can. Want to move the 4th node somewhere to be a DR box? No problem. You can't really do that with the C6100.

There was a question about power as well, and last I checked, I was pulling about 330W on my unit. Remember that if you're doing something with VMware, you can always use DRS and DPM to power down unneeded nodes (I do) to save on power.

With the announcement of VMware's EVO:RAIL though, you can see that even the big OEM's like the concept. ;)
Don't want to raise a dead post, but I took this advice and grabbed 4x Dell R610s for $190 each with 2x L5630, and I can't get over how quiet these things are. Not sure how I'm going to use them, but the possibilities are endless. Not sure why there aren't more posts about going with the R610, given the price and home-friendly noise factor.
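As a side note on the ~330W figure quoted above, a rough annual running-cost estimate looks like this (the electricity rate is just an assumed example; plug in your own):

```python
# Rough yearly energy/cost estimate for a box drawing around the ~330W quoted above.
# The $/kWh rate is an assumption, not a quoted figure.
watts = 330
rate_per_kwh = 0.12          # assumed rate in $/kWh

kwh_per_year = watts / 1000 * 24 * 365
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * rate_per_kwh:.0f}/year")
# -> roughly 2891 kWh/year, about $347/year at the assumed rate
```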
 

yu130960

Member
Sep 4, 2013
Canada
Asking price was $240. Got a volume/local pickup discount at 4 for $190 each.

Decent shape, the normal rack rash. The logs show that the servers were first deployed late 2010, early 2011.

INCLUDES: 2x L5630s with heatsinks, 6GB of RAM (six 1GB ECC Hynix modules from Dell), a DVD optical drive, and 2 crazy no-name DVR capture cards for CCTV (junk?).

NOT INCLUDED: rails, hard drives, HDD caddies (although 5 of the 6 blanks are there), iDRAC Enterprise, SD card reader, or bezel.

Servers are located in Rochester and I think he has 15 or so left.

Does anyone recognize this 16-channel DVR card, and does anyone want 8 of them?

 

Jason Gould

New Member
Nov 3, 2014
I have a question: does anyone know how the Supermicro twin server options compare to the C6100? I'm sort of a big fan of Supermicro products; I just find them easier to deal with than the big OEMs.

This is what I've gathered, but I'm curious if anyone has any input on advantages/disadvantages:
Unless otherwise noted, all are 4-node with redundant 1400W PSUs.
I'm going to try to list the differences between the models and the C6100.

6026TT-H (12-disk models)
Supermicro 6026TT-HTRF : Base model. No mezz card or built in Infiniband.
Supermicro 6026TT-HIBXRF: 20Gbps Infiniband.
Supermicro 6026TT-HIBQRF: 40Gbps Infiniband.

2026TT-H (24-disk models)
Supermicro 2026TT-HTRF: Base model. No mezz card or built in Infiniband.
Supermicro 2026TT-HIBXRF: 20Gbps Infiniband.
Supermicro 2026TT-HIBQRF: 40Gbps Infiniband.

2026TT-H6 (24-disk models w/ hardware RAID via an LSI 2108 on the BPN-ADP-SAS2-H6iR)
Supermicro 2026TT-H6RF: Base model. No mezz or Infiniband.
Supermicro 2026TT-H6IBXRF: 20Gbps Infiniband.
Supermicro 2026TT-H6IBQRF: 40Gbps Infiniband.
*These are the same chassis as the other 2026TTs, just with a slightly different motherboard and backplanes, and perhaps some differences in cabling, I'd suspect.

6026TT-B (12 disk models w/ Intel® I/OAT 3 & VMDq)
6026TT-BTRF: Base model. No mezz or Infiniband.
6026TT-BIBXRF: 20Gbps Infiniband.
6026TT-BIBQRF: 40Gbps Infiniband.
*People probably aren't that interested in these unless you're in a big environment.

There seem to be only 2 chassis used: a 24-bay and a 12-bay.
CSE-217HQ-R1400B (24 disk)
CSE-827H-R1400B (12 disk)
So you could change the internals to get a different setup if necessary. I believe the Infiniband stuff is part of the motherboard, so you would have to change the MB, and in some cases, like the 2026TT-H6 models, it looks like you would need to change backplanes as well. They might even have some different connectors.

It would seem to me that people would find the 6026TT-HTRF and 2026TT-HTRF a good fit as a replacement for a C6100, depending on whether you need 12 or 24 disks. You would need to look for particular models/motherboards if you wanted Infiniband, but I suspect most don't need it. One of the most interesting ones to me is the 2026TT-H6RF, for its 24 disks and hardware RAID via an LSI 2108 SAS2 controller. I believe this is a very popular RAID controller and is ESXi certified. I would assume you could use passthrough with it. I know the LSI 2108 can support a BBU and caching, but I suspect that isn't possible here (if it is, that would be awesome).

I don't think these chassis work with any motherboards that support E5s (X9 socket 1356 boards or X10 socket 2011). It looks like it's only X8 motherboards (socket 1366) that support the Xeon 5600/5500 series. However, if you look at the DP socket 2011 motherboards from Supermicro (like this X10DRT-H), it says the dimensions are 6.8" x 16.64". The two chassis mentioned earlier say they support twin motherboard sizes up to 6.8" x 16.64". So you never know.
 