Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Toddh

Member
Jan 30, 2013
122
10
18
Yes it is. That opens up some nice options for things to do with the C6100.

Still to be tested, but you may be able to take the 24-port version and create a storage server that runs from a single node and connects all 24 disks.


 

legen

Active Member
Mar 6, 2013
213
39
28
Sweden
SR-IOV doesn't seem to work on this motherboard's dual 82576 NIC.

BIOS settings:
VT-d enabled
SR-IOV enabled

ESXi dmesg


2013-03-17T11:15:04.130Z cpu11:4612)<6>igb 0000:01:00.0: eth0: PBA No: 82576B-001
2013-03-17T11:15:04.130Z cpu11:4612)<6>igb 0000:01:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
2013-03-17T11:15:04.130Z cpu11:4612)PCI: driver igb claimed device 0000:01:00.0
2013-03-17T11:15:04.131Z cpu11:4612)<6>igb: : igb_validate_option: max_vfs - SR-IOV VF devices set to 7
2013-03-17T11:15:04.131Z cpu11:4612)<4>igb 0000:01:00.1: Failed to initialize SR-IOV virtualization
Darn, I was really hoping passthrough would work :(
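
If anyone else wants to try to reproduce this, the usual way to set the igb max_vfs parameter in ESXi 5.x is roughly the following (from memory, so verify the esxcli syntax on your build; this is a sketch, not a known-good recipe):

# Thin Python wrapper around the usual ESXi 5.x commands for setting igb max_vfs.
# One value per port, so "max_vfs=7,7" asks for 7 VFs on each 82576 port.
import subprocess

subprocess.run(
    ["esxcli", "system", "module", "parameters", "set", "-m", "igb", "-p", "max_vfs=7,7"],
    check=True,
)

# Confirm what the driver will pick up (a reboot is needed before it takes effect):
subprocess.run(["esxcli", "system", "module", "parameters", "list", "-m", "igb"], check=True)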
 

devioustrap

New Member
Mar 6, 2013
8
0
1
My idle consumption is quite a bit higher than quoted in Patrick's post. Has anybody else tested theirs?

I'm running 3 of the 4 nodes:
1: 2x L5520, 48GB of RAM, 2 SSD, 1 SATA
2: 2x L5520, 48GB of RAM, 3 SATA
3: 2x L5520, 24GB of RAM, 3 SATA

Is ~350W accurate? I don't remember the exact wattage, but the draw was close to 3.1A. I know I'm running more disks and RAM than Patrick, but if this thing truly idles at 174W, I'm using exactly double!
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
My idle consumption is quite a bit higher than quoted in Patrick's post. Has anybody else tested theirs?

I'm running 3 of the 4 nodes:
1: 2x L5520, 48GB of RAM, 2 SSD, 1 SATA
2: 2x L5520, 48GB of RAM, 3 SATA
3: 2x L5520, 24GB of RAM, 3 SATA

Is ~350W accurate? I don't remember the exact wattage, but the draw was close to 3.1A. I know I'm running more disks and RAM than Patrick, but if this thing truly idles at 174W, I'm using exactly double!
What power BIOS settings are you using? Do you have a blank in the 4th node spot?

You can see in my Colo post two chassis with six nodes running, plus two 24-port switches, all on the DC PDU at 5 amps.
 

britinpdx

Active Member
Feb 8, 2013
367
184
43
Portland OR
Has anybody else tested theirs?
Here's another data point to add to the mix, based on readings from my Kill-a-Watt meter ..

Base C6100 chassis, 2x 1100W redundant PSU

With 1 node
Node 1: 2x L5520, 24GB RAM, 1xSSD 4xSATA; OS Server 2012
Off - 12W
Power Up - 220W peak
Node 1 Idle in Server 2012, start menu - about 140W ( actual current draw indicated by the Kill-A-Watt is 1.24A )

With 2 nodes
Node 1: 2x L5520, 24GB RAM, 1xSSD 4xSATA; OS Server 2012
Node 2: 2x L5520, 24GB RAM, 1xSATA; OS Server 2012

Off - 18W
Power Up - 340W peak
Idle in Server 2012, start menu - about 240W ( actual current draw indicated by the Kill-A-Watt is 2.06A )

So rough math says that adding a second node increased the idle power draw by 100W, or about 0.8A

So a very rough approximation for 6 "similarly configured" nodes running at about 0.8A per node is around 5A, give or take ...
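
For anyone who wants to play with the numbers, the rough math above boils down to something like this (purely an extrapolation from my two readings, not a new measurement):

# Back-of-the-envelope idle-power model from the 1-node and 2-node readings above.
one_node_w, two_node_w = 140, 240       # measured idle watts
one_node_a, two_node_a = 1.24, 2.06     # measured amps on the Kill-A-Watt

per_node_w = two_node_w - one_node_w    # ~100 W per additional node
per_node_a = two_node_a - one_node_a    # ~0.82 A per additional node

nodes = 6
print(f"~{nodes * per_node_a:.1f} A for {nodes} similarly configured nodes")  # ~4.9 A
print(f"~{nodes * per_node_w} W if you extrapolate in watts instead")         # ~600 W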
 

callmeedin

New Member
Mar 24, 2013
2
0
0
So I bought one of these bad boys a couple of weeks ago and have been running ESXi 5.1 on a couple of nodes.
Thanks to the great info in this thread, I decided to rewire the HD setup and just run two nodes with 6 HD each.
On the 1st node I am consolidating the ESXi machines, and plan on using a 2nd node for a ZFS server.

With that said, I think I know all I need to make this setup work fully, but I just wanted to verify it with you guys:

2x 6-sata Dell 6J3R2 harness

2x 0.5m Mini SAS (SFF-8087) to 4x SATA

I listed the 1x4 Monoprice cables because I am not sure that the 1x4 cable listed in a previous post (Dell V91FW) will work with the 3.5" backplane, since one of the 4 SATA cables has to be longer to reach the 2nd bank of 3 SATA drives, and looking at the pictures on the linked eBay listing, that doesn't seem to be the case. Plus Monoprice has a much better price. :)

Does this sound right, or am I missing something?

Thanks in advance.
 

britinpdx

Active Member
Feb 8, 2013
367
184
43
Portland OR
With that said, I think I know all I need to make this setup work fully, but I just wanted to verify it with you guys:

2x 6-sata Dell 6J3R2 harness

2x 0.5m Mini SAS (SFF-8087) to 4x SATA
I also have chosen to setup the C6100 with 2 nodes, 6 drives per node.

If you want to use the 6 SATA ports on the motherboard, then the Dell 6J3R2 harness is a must. ( I purchased mine from the same source ). This will connect the 6 SATA ports on the motherboard to the "interposer board". Having torn apart the C6100 to rework it, I now appreciate the effort and diligence that the Dell engineers put in on this product. The Dell cables are just the right length, just the right fit. You could probably use something else, but IMHO it's not worth the time and effort. Buy the right cables and be done with it.

So the next concern is the mid plane breakouts. The interposer board connects to the mid plane and (due to the 6J3R2 harness) passes all 6 SATA connections along. The mid plane then has the SFF-8087 connector as you indicated, but I'm not sure if the Monoprice cables will work or not. I think they do need to be forward breakout cables, so functionally they should work.
0.5m cables, however, are much longer than needed. Care is needed with the SFF-8087 to SATA cable as it needs to be routed from the mid plane through the fan housing assy and then to the backplane.

For each node, the mid plane has 2x SFF-8087 connectors, one for disks 1~4 and the other for disks 5 & 6. Dell makes specific cables for the 4- and 2-disk connections. I purchased mine from a fellow forum member (dba) who was parting out his C6100. Unfortunately I've installed them already and didn't make a note of their actual p/n, but the V91FW cable certainly looks like the 4x drive breakout cable.

Per dba in post #84 in this thread ..

Backplane to Midplane breakout cables:
1) Cable for two drives - part number is 334VV.
2) Cable for three drives - part number is 3R7FF or 3T15J
3) Cable for four drives - part number V91FW.
I think that this cable from the same seller on eBay is the cable for the remaining 2 ports (5 & 6).

Again, the Dell cables are precisely the right length, not too long or too short (hate to go all Goldilocks on you), but they do work. I connected the V91FW cable to one "column" of 3 disks and stretched the 4th connector to the bottom of the other column, then used the 2-port cable for the remaining 2 drives in the next column.

Care should be taken when routing the cables to the backplane. I was a little careless with the cable routing (big hands and fat thumbs), and ended up getting some of the cable labels sticking too far into the fan assy..... startup was a little noisy !!

Not to say that the Monoprice cables won't work, but space is tight behind the backplane and the fan assy.

One thing that I do need to sort out is that the drive power LEDs / access LEDs only turn on for the column with the 1~3 "effective" SATA connections. The others don't light up, but the drives are recognized and work just fine. It's likely backplane specific, I'm just not sure what yet.

And BTW, the replacement fan mod identified by PigLover in the Taming the C6100 thread is certainly worth considering. Fan noise is reduced considerably, but at idle the fans run as low as maybe 1200~1400 rpm, which causes a "low fan" alarm to be generated. It's nice and quiet though ;) Next job is to figure out the IPMI commands to reset the low fan threshold, just one more thing to learn!
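
If I end up scripting that, my starting point will be something along these lines; the BMC address, credentials, sensor names and RPM values are placeholders until I dump the real ones with "ipmitool sensor list", so treat it as a sketch rather than a tested fix:

# Lower the BMC's low-fan-speed thresholds so the quieter fans stop tripping the alarm.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.50", "-U", "root", "-P", "changeme"]

def set_lower_thresholds(sensor, lnr, lcr, lnc):
    # "sensor thresh <id> lower <lnr> <lcr> <lnc>" sets the three lower thresholds in one go
    subprocess.run(BMC + ["sensor", "thresh", sensor, "lower", str(lnr), str(lcr), str(lnc)],
                   check=True)

for fan in ("FAN_1", "FAN_2", "FAN_3", "FAN_4"):
    set_lower_thresholds(fan, 600, 800, 1000)   # comfortably below the ~1200-1400 rpm idle speed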

One reason why I chose to ensure that the 6x SATA connections from the motherboard to the interposer board can be passed through to 6 physical drives, is that I may choose to use a different RAID controller (LSI) over the onboard ICH10R. Theoretically, I think you could even "mix and match" through the interposer board, in that SATA 1&2 could come from the motherboard, and SATA 3~6 could come from something like an LSI 9260-4i. This is pure conjecture at this time, I haven't gotten round to testing it yet, so take this with a grain of salt.
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
The Dell C6100 manual is very good, especially with details about the parts and how to take the chassis to bits.

The manual can be found here (pdf).

I needed to go through it as it looks like the left front panel control is not working correctly. I am hoping it is a disconnected cable but may need to take the whole thing apart.

RB
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
There has to be somewhere to get power from though. Driving me nuts.

Also britinpdx
And BTW, the replacement fan mod identified by PigLover in Taming the C6100 thread is certainly worth considering. Fan noise is reduced considerably, but at idle the fans run as low as maybe 1200~1400 rpm which causes a "low fan" alarm to be generated. It's nice and quiet though Next job is to figure out the IPMI commands to reset the low fan threshold, just one more thing to learn !
I think PigLover or someone on here has the commands needed.
 

zer0sum

Well-Known Member
Mar 8, 2013
849
474
63
Have you tried with a standard Linux install like Ubuntu?
The 82576 cards should do SR-IOV.
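
A quick way to test it on a stock install would be something like the sketch below; the interface name and VF count are just examples, and older kernels need the igb max_vfs module option instead of the sysfs knob:

# Rough check for SR-IOV on the onboard 82576 under a standard Linux install (run as root).
# Assumes a kernel recent enough to expose the sriov_* sysfs files; otherwise use
# "modprobe igb max_vfs=7" and check dmesg for the VFs instead.
from pathlib import Path

IFACE = "eth0"     # whichever interface maps to the 82576 port being tested
NUM_VFS = 7        # the 82576 supports up to 7 VFs per port

dev = Path(f"/sys/class/net/{IFACE}/device")

print("VFs supported:", (dev / "sriov_totalvfs").read_text().strip())

# This write fails if VT-d / SR-IOV isn't actually working end to end.
(dev / "sriov_numvfs").write_text(str(NUM_VFS))

print("VFs active:", (dev / "sriov_numvfs").read_text().strip())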
 

devioustrap

New Member
Mar 6, 2013
8
0
1
What power BIOS settings are you using? Do you have a blank in the 4th node spot?

You can see in my Colo post two chassis with six nodes running, plus two 24-port switches, all on the DC PDU at 5 amps.
I have power management set to "OS control". I couldn't really tell the difference between that and node managed, which seems to just enable the Intel management stuff. I have a node in the 4th spot as a cold spare, but it's powered off.

With 2 nodes
Node 1: 2x L5520, 24GB RAM, 1xSSD 4xSATA; OS Server 2012
Node 2: 2x L5520, 24GB RAM, 1xSATA; OS Server 2012
Off - 18W
Power Up - 340W peak
Idle in Server 2012, start menu - about 240W ( actual current draw indicated by the Kill-A-Watt is 2.06A )

So rough math says that adding a second node increased the idle power draw by 100W, or about 0.8A
These numbers are a bit different from Patrick's, imo, and much closer to mine. If you're using 240W with two nodes, then me using 350W with 3 (2 of which have double the RAM) is well within reason. Using your math of 0.8A per additional node, each chassis with 3 nodes will use 2.86A. For two equally configured servers, you're at 5.72A, and that's not including his two switches.

Not to mention that your 240W idle with two nodes is higher than his quoted idle of 174W with all 4 nodes running!

If Patrick is running two chassis, each with three nodes, he has to be under 2.5A per chassis.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
I have a feeling there is something in the BIOS that we are missing. It doesn't make sense that I see fairly consistent numbers both on the Extech in the lab and on the third-party PDU (and I would assume, if anything, that the PDU would report a bit high, since over-5A overages apply). The ones in the colo do have add-in cards, e.g. dual-port NICs in the pfSense nodes.

One other thing I'm wondering is if this is a Windows v. Linux thing. When I get a breather I will try using Windows OSes and see what I find.
 

callmeedin

New Member
Mar 24, 2013
2
0
0
I also have chosen to setup the C6100 with 2 nodes, 6 drives per node.

If you want to use the 6 SATA ports on the motherboard, then the Dell 6J3R2 harness is a must. [...]

One thing that I do need to sort out is that the drive power LEDs / access LEDs only turn on for the column with the 1~3 "effective" SATA connections. The others don't light up, but the drives are recognized and work just fine. It's likely backplane specific, I'm just not sure what yet.
Thank you for the detailed response. I will definitely go with the original 6-SATA harness. I think I will take my chances with the Monoprice 1x4 cables; they're cheap enough, and I can even return them if needed. I will try to get the 0.5m of cable nicely organized between the backplane and the fans. While rewiring the original 1x3 wires to only use 2 nodes, I discovered that freeing the screws that hold the entire HD assembly goes a long way toward having a little bit more room to work on the cabling.

One question: Any particular reason (other than space & cable organization) that I can't just reuse the original 1x3 harness and just leave one SATA plug disconnected? That was my plan, rather than buying another 1x2 harness.

As far as the HD LEDs go, I believe it has to do with the 4-pin SGPIO connector that is part of the original Dell harness. I don't know what to do about it, but as long as the HDs are working, it seems like a minor issue to me.
 

Smalldog

Member
Mar 18, 2013
62
2
8
Goodyear, AZ
I received two systems today. I just looked up the service tag on Dell's website, and both say they have next business day warranty service through 6/29/2014.

If I had a problem, could I expect Dell to come out to fix it? Looks like both of them were placed into service on 6/29/2011.

Jeff
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
Wow! That is interesting. How does that work? I would assume no warranty.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Nice! Visit the Dell web site and try to perform a tag transfer for your servers.

I received two systems today. I just looked up the service tag on Dell's website, and both say they have next business day warranty service through 6/29/2014.

If I had a problem, could I expect Dell to come out to fix it? Looks like both of them were placed into service on 6/29/2011.

Jeff
 

Toddh

Member
Jan 30, 2013
122
10
18
I want to mount an Intel SAS expander in the area where a node had been removed. Any ideas on how to get power to the back of the chassis to an empty bay?



 

Patrick

Administrator
Staff member
Dec 21, 2010
12,514
5,805
113
I want to mount an Intel SAS expander in the area where a node had been removed. Any ideas on how to get power to the back of the chassis to an empty bay?
I've been thinking about this, and my "guess" is that you could get power from the harness that supplies power to the motherboard. But I don't know the pin-outs, and I want to disclaim that it is likely a bad idea.