The Dell PowerEdge C6220 Thread


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
As of today, I finally have a Dell PowerEdge C6220. Like the Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server thread before it, I am going to make a mini-reference guide to the server. These cost quite a bit more than the C6100s did when I bought that first one, since E5-2670 processors still run $120-185 each and this chassis can take eight of them (I was getting C6100s for $1,000 fully loaded a few years ago). It also did not come with drive trays, a bummer since I got rid of a stack of them about six months ago.

These will cost more, but here is a video describing what the C6220 adds over the C6100 generation.

Note: this is going to take a while to build out, but I am going to mostly keep the outline of the C6100 thread.

Getting a Dell PowerEdge C6220 inexpensively
I purchased the diskless unit for $1,100 plus $100 shipping. The auction was listed as having dual SFP+ 10Gb ports, but it instead came with Mellanox single-port ConnectX-3 FDR cards. Possibly an upgrade, but I do not have enough QSFP ports for them, so I ended up buying additional SFP+ cards on eBay.

Here is the eBay search for the Dell PowerEdge C6220

Dell PowerEdge C6220 Basic Specs
  • Total Nodes: 4
  • Total Power Supplies: 2
  • CPU Support: Dual Intel Xeon E5-2600 V1 (V2 is supported by the C6220 II generation)
  • DDR3 DIMM Support: 2 DIMMs per channel, 8 DIMMs per CPU, 16 DIMMs per node for up to 512GB (quick math sketched below the list): 4GB/8GB/16GB/32GB LV DDR3 RDIMM (1333MT/s); 4GB/8GB/16GB DDR3 RDIMM (1600MT/s, 1.5V/1.35V)
  • Chassis drive support: 24x 2.5" or 12x 3.5" per chassis (six or three drives per node, respectively)
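
To sanity-check the memory spec above, here is the quick math (a minimal sketch; 32GB is simply the largest RDIMM on the support list, and E5-2600 CPUs have four memory channels each):

    # Per-node memory ceiling: 2 CPUs x 4 channels x 2 DIMMs per channel
    cpus_per_node = 2
    channels_per_cpu = 4
    dimms_per_channel = 2
    max_dimm_gb = 32  # largest RDIMM on the support list

    dimms_per_node = cpus_per_node * channels_per_cpu * dimms_per_channel
    print(dimms_per_node)                    # 16 DIMMs per node
    print(dimms_per_node * max_dimm_gb)      # 512 GB per node
    print(4 * dimms_per_node * max_dimm_gb)  # 2048 GB across the 4-node chassis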

Dell PowerEdge C6220 Power
There are two power supplies for 4x total nodes, which works out to one quarter as many power supplies per node as four standalone dual-PSU 1U servers would carry. The 1400W power supplies appear to accept 200V and higher input only. We will have to test these in our Sunnyvale data center lab.

2 nodes active (with mezzanine cards), idle in OS: 265W
2 nodes active (with mezzanine cards), idle in a PXE boot loop: 380W
Max thus far: 1020W
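
For rough planning, here is the naive per-node split of those chassis-level readings (a sketch only; it assumes the shared fan and PSU overhead divides evenly across active nodes, which is an approximation):

    # Back-of-the-envelope per-node power from wall readings with 2 active nodes
    active_nodes = 2
    idle_os_w, idle_pxe_w = 265, 380

    print(idle_os_w / active_nodes)   # 132.5 W per node, idle in OS
    print(idle_pxe_w / active_nodes)  # 190.0 W per node, PXE boot loop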

Dell PowerEdge C6220 Internals


Dell PowerEdge C6220 Chassis
There were two main chassis options, one with 12x 3.5" disks and one with 24x 2.5" disks.

Dell PowerEdge C6220 Management
The sticker on the Dell C6220 motherboards we have says Avocent, so we would expect this to be more like traditional Dell servers and those from Gigabyte.

Default IPMI Username and Password: root / root
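Since these ship with root/root, changing that should be the first order of business. Here is a minimal sketch using ipmitool from another box (assumptions: the BMC is reachable over the network and root sits at the usual user ID 2; confirm with the user list call first; the address is a placeholder):

    import subprocess

    BMC = "192.168.1.120"  # placeholder BMC address

    def ipmi(*args):
        # lanplus talks to the BMC over the network; root/root is the factory default
        cmd = ["ipmitool", "-I", "lanplus", "-H", BMC,
               "-U", "root", "-P", "root", *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(ipmi("chassis", "status"))  # verify the BMC answers
    print(ipmi("user", "list", "1"))  # find root's user ID (commonly 2)
    ipmi("user", "set", "password", "2", "ANewStrongPassword")  # change the default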

Dell PowerEdge C6220 Sound
Measurements thus far on the Extech NIST-calibrated sound meter:
Idle:
100% Load Across 8x Intel Xeon E5-2670:

Dell PowerEdge C6220 Expansion
Each Dell C6220 node comes with a low profile PCIe 2.0 x16 slot on a riser.

There is also a PCIe x8 mezzanine card slot with InfiniBand, 10GbE SFP+, and LSI controller options.

Dell PowerEdge C6220 Performance

Dell PowerEdge C6220 Drivers
Here is a link to current Dell drivers. Most components are very standard.

Dell PowerEdge C6220 BIOS
From what I can see, there is an InsydeH2O sticker, so the system uses an InsydeH2O UEFI BIOS (Insyde's UEFI firmware for mobile, desktop, embedded, and server platforms).

Dell PowerEdge C6220 Spare Parts List
  • Rackmount Rail Kit - 0Y3DX1
  • Mid-plane Chassis Fan -
  • 1.1kW PSU -
  • Motherboard Tray - D61XP
  • Mellanox Dual-Port QDR InfiniBand Daughter Card - JR3P1
  • Mellanox ConnectX-3 FDR InfiniBand/Ethernet Mezzanine Card (single port) - 0T483W
  • SAS Bridge Card - JKM5M
  • Intel 82599 10GbE Daughter Card - TCK99
  • 3.5" Drive Sled - 8TV68
  • 2.5" Drive Tray - D273R
  • Heatsinks - 0NGDCM
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I know that there are several different power supplies available for these units:
  • Your old C6100 or C6105 power supply, if you use lower-power CPUs
  • ??? - 100-240V 1200W 80 PLUS Platinum; also called PS-2112-5D
  • 4CG41 - 1400W version on the Dell web site
  • RN0HH - 200-240V 1400W 80 PLUS version; older, I believe
  • Y53VG - 200-240V 1400W, green sticker
  • ??? - 208-277V rumored special ultra-high-efficiency version
The above information is gathered from reliable sources, but not directly verified by me. My C6220 has Y53VG supplies.

 

PnoT

Active Member
Mar 1, 2015
Texas
I've been keeping my eye on these for a bit, as I miss my C6100 and also because they'd be ideal for a Storage Spaces Direct test bed all in a single chassis. The single-port ConnectX-3 is a letdown though, as two ports would have enabled IB/Ethernet combinations with ease; if I want both, the PCIe slot has to be tied up.

The 2U sleds look interesting as they provide more PCIe slots, but I am unsure whether you could put, say, one 2U-height sled and two 1U sleds into one chassis?
 

Patrick

Administrator
Staff member
Dec 21, 2010
Well, I still had 4x C6100 drive trays. I hope I can source 8x 3.5" ones locally tomorrow. The good news is they seem to be the same.

The PSUs this one has are 80 PLUS Platinum, DP/N 0CN35N, 200-240V 1400W.
 

PnoT

Active Member
Mar 1, 2015
Texas
I ran across something very interesting in the technical manual: http://site.pc-wholesale.com/manuals/Dell-PowerEdge-C6220.pdf

The storage section describes how the rear-to-front storage connections now run through the board, which reduces clutter among other things. What is odd is that no mezzanine or add-in card is listed as running faster than 3Gbps: all of the cards in the list are capable of 6Gbps but state in parentheses "(support only at 3Gbps)". Do you think that is down to the wiring, or is it the real deal? If the entire chassis is hardwired for 3Gbps, that is a deal breaker for me at least. I would rather not route cables through the nodes and into the backplane, even if that is still possible considering there is an expander now.
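
One way to settle it empirically, if anyone has a 6Gbps drive on one of these backplanes: read the negotiated link speed that smartctl reports (a minimal sketch, assuming a Linux node with smartmontools installed; /dev/sda is a placeholder):

    import re
    import subprocess

    DEV = "/dev/sda"  # placeholder: a drive sitting behind the backplane

    # smartctl prints e.g. "SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)"
    out = subprocess.run(["smartctl", "-i", DEV],
                         capture_output=True, text=True).stdout
    match = re.search(r"SATA Version is:.*", out)
    print(match.group(0) if match else "no SATA link info reported")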
 

Dajinn

Active Member
Jun 2, 2015
PnoT said:
I ran across something very interesting in the technical manual: http://site.pc-wholesale.com/manuals/Dell-PowerEdge-C6220.pdf ... no mezzanine or add-in card is listed as running faster than 3Gbps: all of the cards in the list are capable of 6Gbps but state in parentheses "(support only at 3Gbps)" ... If the entire chassis is hardwired for 3Gbps, that is a deal breaker for me at least.
That is interesting, especially for a 2011 platform. I did find this in a Google search; it seems someone else ran into a similar "issue":

Dell c6220 with lsi sas 2008 mezzanine cards
 

PnoT

Active Member
Mar 1, 2015
Texas
I was initially thinking about buying one of these, but the more I research them, the less appealing they get. I'm a bit confused why Dell shipped the C6220 supporting only V1 CPUs and then immediately released the C6220 II, which only enabled V2 chips and faster RAM. That's not much of a leap across two server generations, to be honest, and not that much over the C6100. They also state these are PCIe 3.0, but they're not, either: only the mezzanine slot is 3.0 and the rest is 2.0, which is meh.

I now see why a lot of people are going with some of the Intel-based 2U 4-node servers.
 

JeffroMart

Member
Jun 27, 2014
I picked up an empty C6220 chassis for $80 on eBay last week, power supplies included, so I thought why not. With just the empty chassis I can populate the nodes later and keep the upfront cost low, though it will probably end up costing more in the long run than buying it with the nodes already installed, which I am OK with.

That being said, when they switched from v1 to v2 of the system, did only the nodes/sleds change, or did the chassis change as well? I would like to be a little informed before buying a v2 node and finding it does not work in the chassis.

Does anyone have part numbers for the various nodes? It looks like the v1 node might be TTH1R?

Based on: DELL POWEREDGE C6220 NODE MOTHERBOARD MICRO CLOUD SERVER ASSEMBLY - PART TTH1R
 

Patrick

Administrator
Staff member
Dec 21, 2010
When the C6100 wave happened, I got an "anonymous" tip from someone familiar with the C6xxx program (I think that is the standard way to say it, right?). Rumor has it they changed something on the C6220 II backplanes as well. I have not had the hardware to confirm.
 

lmk

Member
Dec 11, 2013
Patrick said:
When the C6100 wave happened, I got an "anonymous" tip from someone familiar with the C6xxx program... Rumor has it they changed something on the C6220 II backplanes as well. I have not had the hardware to confirm.
I remember that one of those new models allows for dynamic allocation of drives to nodes, so that may be the reason.

However, I cannot recall which version (C6220 or C6220 II) it started with.
 

Patrick

Administrator
Staff member
Dec 21, 2010
lmk said:
I remember that one of those new models allows for dynamic allocation of drives to nodes, so that may be the reason. However, I cannot recall which version (C6220 or C6220 II) it started with.
It was with the C6220 II.
 

Sielbear

Member
Aug 27, 2013
I'm looking at upgrading our C6100 to the C6220. We would set up the old C6100 as a replication destination (node-to-node) with the C6220 servers. Based on our workloads, I'm strongly considering a couple of SSDs in the configuration, RAID 1 only, for redundancy. I know that with the C6100, the LSI 1068E card did not properly see SSDs when they were connected, and consequently SSD performance would drop substantially once the drive filled, as TRIM, etc. is never passed.

My question is: if I'm looking at SSDs (considering a 250GB boot volume and 1TB for VM storage), how would you connect them? With just RAID 1 there is no parity to worry about or calculate, so I'm actually tempted to run on the embedded SATA controller.

What are your thoughts on this?
 

Patrick

Administrator
Staff member
Dec 21, 2010
So what I did on my 3.5" nodes was:
SATA III port 1 - SSD1
SATA III port 2 - SSD2
SATA II port 1 - HDD1

The Intel C602 PCH works well if you are using SATA SSDs. You just do not get enough 6Gbps ports, because that was the generation where Intel added a 3Gbps SAS controller to the PCH.
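
If anyone goes the embedded-controller route under Linux, here is a minimal sketch of the mirror plus a TRIM sanity check (assumptions: mdadm and util-linux installed, and a recent kernel that passes discard through md RAID 1; the device names are placeholders, and creating the array destroys their contents):

    import subprocess

    SSD1, SSD2 = "/dev/sda", "/dev/sdb"  # placeholders: SSDs on the two 6Gbps ports

    # Build a software RAID 1 across the two SSDs (destructive!)
    # --run skips the interactive confirmation prompt
    subprocess.run(["mdadm", "--create", "/dev/md0", "--run", "--level=1",
                    "--raid-devices=2", SSD1, SSD2], check=True)

    # lsblk -D shows non-zero DISC-GRAN/DISC-MAX when discard (TRIM) works
    print(subprocess.run(["lsblk", "-D", "/dev/md0"],
                         capture_output=True, text=True).stdout)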
 

Sielbear

Member
Aug 27, 2013
Thanks, Patrick. I had read that the board had 2x SATA III ports and 4x SATA II, but I couldn't find independent verification.

And am I correct that I'll keep consistent speeds, since TRIM will be supported, if I put the SSDs in a RAID 1 on the onboard SATA controller?

Thanks again for your help! Looking to pull the trigger on this today or tomorrow.
 

Nemon

New Member
Oct 20, 2013
Patrick,

Thank you for the update.

Do you happen to have the part number for the Dell PowerEdge C6220 rails? I could not find it myself.

Please let me know!

Thank you
 

PnoT

Active Member
Mar 1, 2015
Texas
Has anyone tried a 6Gbps mezzanine card and verified that it will only do 3Gbps on these boards? The technical specifications say the max is 3Gbps, but I find that hard to believe. If the mezzanine will do 6Gbps, I can throw in a PCIe ConnectX-3 and get the storage plus both 10GbE and 56Gb connectivity.
 

Sielbear

Member
Aug 27, 2013
It turns out the C6220 I purchased has the LSI card installed. I'm hoping to get Windows installed tomorrow, and I'll check the connection speed.
 

schammy

New Member
Jan 2, 2016
Hi everyone, I just bought two C6220 IIs off eBay (the 24x 2.5" version). They seem absolutely perfect for my needs. However, I can't get the BIOS to see any of the drives I put in them.

I've tried normal spinning drives as well as SSDs. The tray light comes on when I power on the appropriate node, but the BIOS never shows any drives connected, nor does the Debian Linux installer see any drives. I've done this on two nodes in one chassis and one node in the other, with the same result every time.

There is no LSI RAID card in here. As far as I know it should just be the Intel C600 SATA controller running things.

I've tried both "AHCI" and "RAID" in the SATA settings, with no change.

Speaking of the RAID setting... I'm going to be using Linux software RAID (mdadm), but supposing I did want to use the Intel RAID, where in the heck do I configure that? I tried going into the Intel boot agent or whatever it's called but that's only for PXE which I'm not using. That's the only option other than entering the BIOS before it tries to boot.