Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


hsben

Member
Sep 10, 2014
Anyone have any knowledge on the Supermicro 6026TT-TF? I'm very interested in one of these nodes, but the C6100s I'm finding on eBay are way more expensive.

I'd prefer a mostly barebones setup, as I already have a 2x L5520 setup. The barebones 6026TT is ~$400 for 4 nodes, while the C6100 is $625 for 2 nodes plus some RAM and CPUs.

Also, how noisy are these things? Is there a good possibility I can swap the fans out for something quieter?
 

jaymemaurice

New Member
Sep 10, 2014
Hey guys, long-time lurker here. I have a node in my lab that just powered itself off and will not power back on - the BMC heartbeat light flashes, and the IPMI SEL has nothing meaningful. The power light on the front panel never comes on, and the power buttons on the front panel, on the back of the node, and through IPMI do nothing. I have removed all but one CPU and two sticks of RAM in the right slots, tried a different processor, and confirmed that the fault follows the node, not the slot in the chassis.

So I am thinking about grounding out the PWR_ON pin on the power supply connector on the interposer board.

Before that, anything else you guys recommend? Is there any surface-mount fuse, capacitor, transistor, or diode I should check that is responsible for the power-on function on the motherboard?

So odd that it would just power off like that... it's connected to a UPS, and the IPMI is still reachable and detects the PS1 and CPU presence in the sensor data... is there a way in the Dell-branded IPMI to get more detailed POST codes? I see them mentioned on page 23 of this guide... Dell PowerEdge C6100 | Hardware Owner's Manual - should they be in the SEL?

I assume I should probably use a pull-down resistor to ground out the PWR_ON pin, but I noticed most of you who are hacking the boards out of the chassis just jumper it with a wire... obviously I'll still have the board plugged in... any opinions here?
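
Since the IPMI is still reachable over the LAN, a small script around ipmitool can pull the SEL, the sensor data, and the chassis power state in one pass to look for clues. This is a minimal sketch, assuming ipmitool is installed; the BMC address and credentials below are placeholders, not values from this thread.

    #!/usr/bin/env python3
    # Minimal sketch: dump the SEL, sensor readings, and chassis power state
    # from a C6100 BMC over LAN. Host/user/password are placeholders.
    import subprocess

    BMC_HOST = "192.168.1.120"   # hypothetical BMC address
    BMC_USER = "root"            # C6100 BMCs commonly ship as root/root
    BMC_PASS = "root"

    def ipmi(*args):
        """Run one ipmitool command against the BMC and return its output."""
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
               "-U", BMC_USER, "-P", BMC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    print(ipmi("chassis", "power", "status"))  # power state as the BMC sees it
    print(ipmi("sel", "elist"))                # event log in human-readable form
    print(ipmi("sensor"))                      # all sensors with thresholds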
 

jaymemaurice

New Member
Sep 10, 2014
So I disconnected PS_ON from the interposer-to-motherboard cable and grounded it on the interposer - no change. Actually, come to think of it, that was a silly idea, since I remember now that the hard drives for that node spun up, so the issue must be elsewhere on the board or interposer. It clearly has 12V standby power and ground... so maybe I should start checking the power rails and see if anything is open.
 

moto211

Member
Aug 20, 2014
hsben said:
Anyone have any knowledge on the Supermicro 6026TT-TF? I'm very interested in one of these nodes, but the C6100s I'm finding on eBay are way more expensive.

I'd prefer a mostly barebones setup, as I already have a 2x L5520 setup. The barebones 6026TT is ~$400 for 4 nodes, while the C6100 is $625 for 2 nodes plus some RAM and CPUs.

Also, how noisy are these things? Is there a good possibility I can swap the fans out for something quieter?
I just picked up a 6026TT-TF and I'm very happy with it. mrrackables has the barebones unit for $349, and the one I got was equipped with 8x L5520 for $499. I really like that it came with all 12 drive trays without having to buy them separately. I did give up hot-swap functionality, but I can live with that. It does make adding RAM more difficult, but I don't do that very often. If you need hot swap, you should be looking at the 6026TT-HTRF; the HTRF 4-node barebones is around $575. My main reason for exploring the 6026TT line was that I didn't want to get stuck with some DCS BS.

And yes, the fans are very loud. They are 80mm x 38mm-thick fans with standard 4-pin PWM connectors. If I didn't have a sound-deadened XRackPro, I would swap mine out.
 

c6100

Member
Oct 22, 2013
Can anyone share their MLB TEMP 2 reading? Mine is showing 77C, which I believe is why it is rebooting every few days.
 

Dk3

Member
Jan 10, 2014
c6100 said:
Can anyone share their MLB TEMP 2 reading? Mine is showing 77C, which I believe is why it is rebooting every few days.
Mine did once get to 78C but did not reboot. It usually ranges from 55 to 65C depending on ambient temperature.

I added a 40mm fan near the heatsink, which reduced the temperature. Maybe you can try it if the mezzanine slot is unused.
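
If you want to confirm the temperature really does line up with the reboots, it is easy to poll that sensor on an interval and keep a log. A minimal sketch, assuming ipmitool is available locally and that the sensor is named exactly "MLB TEMP 2" as it appears in the sensor list:

    #!/usr/bin/env python3
    # Minimal sketch: poll the "MLB TEMP 2" sensor once a minute and log it,
    # so reboots can be lined up against temperature. Assumes ipmitool is
    # installed locally and the sensor name matches `ipmitool sensor` output.
    import subprocess, time

    SENSOR = "MLB TEMP 2"
    ALARM_C = 75.0   # hypothetical threshold worth flagging

    while True:
        out = subprocess.run(["ipmitool", "sensor", "reading", SENSOR],
                             capture_output=True, text=True).stdout
        # Output looks like: "MLB TEMP 2 | 77"
        try:
            temp = float(out.split("|")[1].strip())
        except (IndexError, ValueError):
            temp = None   # sensor missing or non-numeric reading
        if temp is not None:
            flag = "  <-- over threshold" if temp >= ALARM_C else ""
            print(f"{time.strftime('%F %T')}  {SENSOR}: {temp:.0f}C{flag}")
        time.sleep(60)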
 

c6100

Member
Oct 22, 2013
Dk3 said:
Mine did once get to 78C but did not reboot. It usually ranges from 55 to 65C depending on ambient temperature.

I added a 40mm fan near the heatsink, which reduced the temperature. Maybe you can try it if the mezzanine slot is unused.
I do use the mezz slot for a 10Gb card. How did you end up mounting the fan?
 

devioustrap

New Member
Mar 6, 2013
I know this is a huge thread, but I'm hoping someone can shed some light on what's shared between the nodes in terms of IPMI/BMC.

I have a C6100 racked in a DC as a production server, and I've noticed that sometime in the last month or so I lost IPMI access to all nodes (likely simultaneously). Is the IPMI BIOS somehow shared between all of the nodes, so that a failure somewhere can take it out across the whole server?
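
One quick way to narrow down whether all four BMCs are actually dead (rather than, say, a switch-port or routing problem) is to probe each controller directly. A minimal sketch, assuming ipmitool and placeholder addresses/credentials for the four node BMCs:

    #!/usr/bin/env python3
    # Minimal sketch: probe each node's BMC and report which ones still answer
    # IPMI over LAN. Addresses and credentials are placeholders.
    import subprocess

    BMCS = {"node1": "192.168.1.121", "node2": "192.168.1.122",
            "node3": "192.168.1.123", "node4": "192.168.1.124"}

    for name, host in BMCS.items():
        try:
            r = subprocess.run(["ipmitool", "-I", "lanplus", "-H", host,
                                "-U", "root", "-P", "root", "mc", "info"],
                               capture_output=True, text=True, timeout=15)
            ok = (r.returncode == 0)
        except subprocess.TimeoutExpired:
            ok = False
        print(f"{name}: {'reachable' if ok else 'NO RESPONSE'}")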
 

lmk

Member
Dec 11, 2013
Anytime I have had the IPMI go wonky (really, really slow web GUI, KVM, etc.) on the nodes, a power cycle of the server or the switch has always restored the speed.
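
If the BMCs are still reachable at all, a cold reset of each controller is also worth trying before a full power cycle, since it restarts only the management controller and not the running host. A minimal sketch, again assuming ipmitool and placeholder BMC addresses/credentials:

    #!/usr/bin/env python3
    # Minimal sketch: cold-reset the BMC on each C6100 node over LAN.
    # This restarts the management controller only, not the host OS.
    # Addresses and credentials below are placeholders.
    import subprocess

    BMCS = ["192.168.1.121", "192.168.1.122",
            "192.168.1.123", "192.168.1.124"]

    for host in BMCS:
        r = subprocess.run(["ipmitool", "-I", "lanplus", "-H", host,
                            "-U", "root", "-P", "root", "mc", "reset", "cold"],
                           capture_output=True, text=True)
        print(host, "->", (r.stdout or r.stderr).strip())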
 

Dk3

Member
Jan 10, 2014
c6100 said:
I do use the mezz slot for a 10Gb card. How did you end up mounting the fan?

Sorry for the late reply, I was travelling the past week. I used a 5V USB-powered fan.



Without the mezzanine slot in use, I simply screwed the fan onto the heatsink.

With the mezzanine slot in use, I used a 50mm fan and simply laid it in between, with the fan blowing towards the rear.

Too bad I do not have a photo of this, as the server is in use. I'm afraid it may not work for you, though, since your 10Gb card has a heatsink attached, which makes the space too constrained. Mine is the Y8Y69 RAID mezzanine without a heatsink, so it was easier.

Also note that I only did this for nodes 1 and 3, as there are ventilation holes in the casing there.
 

StopDropRoll

New Member
Oct 29, 2014
Does anyone know the part number for the fan plug (mine is actually a DCS6005), or have a spare they would be willing to sell? I clipped my Delta's plug too short when I was swapping fans.

Thanks!

p.s. this forum has been a HUGE help!
 

Injector2

New Member
Dec 14, 2014
Does anyone else have blades without the second mezz card slot? I purchased some 10Gb cards only to realize the connector on my blades was non-existent, so at this point I have about $1200 worth of NICs that I can't use.

I would love to trade blades (minus CPU and RAM) with anyone who's not planning on using their mezz slots (I will pay for all the shipping).

 

MACscr

Member
May 4, 2011
Can someone tell me the DIMM slot ordering? I have two of these systems and I'm finding that some of the RAM is bad. I used dmidecode to show which DIMMs are not showing up, but I'm not sure how those numbers relate to the physical DIMM slots.
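
One way to line dmidecode's numbering up with the board is to print each memory device's locator label next to what is detected there, since the locator strings should match the silkscreen next to each slot. A minimal sketch, assuming Linux with dmidecode installed (run as root):

    #!/usr/bin/env python3
    # Minimal sketch: list each DIMM slot's locator label and what (if anything)
    # dmidecode reports populated there. Run as root on Linux with dmidecode.
    import subprocess

    out = subprocess.run(["dmidecode", "-t", "memory"],
                         capture_output=True, text=True).stdout

    locator, size = None, None
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Locator:"):
            locator = line.split(":", 1)[1].strip()
        elif line.startswith("Size:"):
            size = line.split(":", 1)[1].strip()  # "4096 MB" or "No Module Installed"
        elif not line and locator:
            print(f"{locator:12s} {size}")        # one row per physical slot
            locator, size = None, None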
 

comnam90

New Member
Feb 22, 2015
Hey, thinking about getting one of these with 4 nodes, but I want to be able to make good use of Storage Spaces (Win 2012 R2).
Is it possible to 'trick' or 'force' a 4-node unit to only present the storage to 3 nodes? Like unplugging the storage controller in node 4 or something? Just so that the first 3 nodes get 4 disks each.
 

StopDropRoll

New Member
Oct 29, 2014
comnam90 said:
Hey, thinking about getting one of these with 4 nodes, but I want to be able to make good use of Storage Spaces (Win 2012 R2).
Is it possible to 'trick' or 'force' a 4-node unit to only present the storage to 3 nodes? Like unplugging the storage controller in node 4 or something? Just so that the first 3 nodes get 4 disks each.
I have a DCS6005, but I believe the setup should be similar.

On mine, each mobo has 6 SATA ports which connect to the backplane, and you can plug them into the backplane however you please. To do this you will need to remove the fans, and it would be advisable to have a set of needle-nose pliers or something similar that will let you plug the cables in, as access is tight. Also, take note of the tags on the cords so you can make a diagram of where your drives are.

As I've said, I have a 3-node DCS6005. One of the nodes is being used for FreeNAS with 6 drives plugged into it.

If you need any pictures or anything, let me know.
 

comnam90

New Member
Feb 22, 2015
StopDropRoll said:
I have a DCS6005, but I believe the setup should be similar.

On mine, each mobo has 6 SATA ports which connect to the backplane, and you can plug them into the backplane however you please. To do this you will need to remove the fans, and it would be advisable to have a set of needle-nose pliers or something similar that will let you plug the cables in, as access is tight. Also, take note of the tags on the cords so you can make a diagram of where your drives are.

As I've said, I have a 3-node DCS6005. One of the nodes is being used for FreeNAS with 6 drives plugged into it.

If you need any pictures or anything, let me know.
Cool! Well, hopefully as far as drives go, the C6100 is the same as the DCS6005.

Some pictures would be awesome :)
 

Rain

Active Member
May 13, 2013
comnam90 said:
Hey, thinking about getting one of these with 4 nodes, but I want to be able to make good use of Storage Spaces (Win 2012 R2).
Is it possible to 'trick' or 'force' a 4-node unit to only present the storage to 3 nodes? Like unplugging the storage controller in node 4 or something? Just so that the first 3 nodes get 4 disks each.
4 drives per node is going to be tricky. @StopDropRoll is correct, the backplane has individual SAS/SATA plugs on it, but the way they connect to the interposers is fairly weird (to say the least) and, AFAIK, not very standard.

The interposer boards connect to the backplane with 3-lane SFF-8087 reverse breakout cables, not 4-lane (i.e., 4-plug) cables. There are two SFF-8087 plugs per interposer, but only 6 SAS/SATA plugs are available (consider that even the 24x 2.5" drive version has only 6 drives per node). You might need a lot of extra SFF-8087 reverse breakout cables to ensure you get the right 4 drives mapped to the right nodes, because the weird 3-lane cables that come with the chassis are not going to allow you to easily bring "just" 4 drives to a single interposer board. The included SFF-8087 breakout cables are also extremely short.

It's (relatively) easy to do 6 drives to one node (or 6 drives each to two nodes), but anything else will be tricky. If you purchase SFF-8087 breakout cables, make sure they're as slim as possible; space is tight through the cutouts.