The Dell PowerEdge C6220 Thread


JeffroMart

Member
Jun 27, 2014
61
18
8
45
Yeah, I got a chance tonight to check it, and it appears to have linked at 6 Gb/s; here is a screenshot from inside the 9265 BIOS. I should also mention it is a C6220 II chassis, so I'm not sure whether a v1 system would give the same results.

 
  • Like
Reactions: Patrick

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Thanks for posting back. I learned earlier today after skimming even more owner's manuals that the 6220 II motherboard is more like the 6100: it has ports near the front edge where the node plugs into the midplanes, similar to how the 6100 has an interposer extender. If you use an HBA then you use the ports near the front edge rather than the onboard plugs near the rear I/O. This does enable 6.0 Gbps links. They really dropped the ball on 6220 gen I.
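If you want to confirm the negotiated link speed from the OS rather than from the controller BIOS, the kernel log on a Linux node records it at boot. A minimal sketch (assumes the drives sit on a kernel-visible AHCI/SATA port and that the boot messages are still in the buffer; `link_speeds` is just an illustrative helper name):

```shell
# Pull the negotiated SATA link speed per port out of kernel log text.
# Boot messages look like:
#   ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
link_speeds() {  # reads dmesg text on stdin
  grep -o 'ata[0-9]*: SATA link up [0-9.]* Gbps' | sort -u
}

# On a live node you would run:  dmesg | link_speeds
# Demo on a canned line:
printf '[1.2] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)\n' | link_speeds
# → ata1: SATA link up 6.0 Gbps
```

Anything reporting 3.0 Gbps there would confirm the gen I midplane limitation from the OS side.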
 
  • Like
Reactions: schammy

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Has anyone tried a C6220 II node/sled in a known C6220 I chassis? If so, is it compatible? There isn't anything different between the two chassis that I know of that should keep it from working, aside from arbitrary redesigns by Dell.
 

JeffroMart

Member
Jun 27, 2014
61
18
8
45
I tried. I originally purchased an empty v1 chassis planning to add the nodes later, because I got a great price on it. A month or so later I purchased what was supposed to be just 4 nodes, but they ended up shipping 4 nodes plus the v2 chassis. I had some issues with the v2 chassis they sent, and while troubleshooting I tried putting the v2 nodes in the v1 chassis. They all POSTed, but with an amber light instead of green on the power/system status LED, and the chassis ramped all of the fans to 100% all the time.

The v2 chassis shows all green LEDs, with the fans ramping up during power-on and then dropping once the BMC comes online, as expected. Hope that helps!
 

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Good to know! Just one other question: is there anything visible on the v2 chassis that distinguishes it from the v1 chassis?
 

JeffroMart

Member
Jun 27, 2014
61
18
8
45
Not that I noticed, but I wasn't really looking for it at the time. The main reason I even noticed was that the BIOS showed C6220 II on the nodes during POST. I'll check the physical chassis tomorrow if I remember; the service tag would be the other way to tell, if you can get it.
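If a node is running Linux, the model string and service tag can also be read out of SMBIOS with `dmidecode` instead of watching the POST screen. A sketch, assuming `dmidecode` is installed and run as root; `classify_gen` is just an illustrative helper name:

```shell
# Read the model and service tag from SMBIOS and classify the generation.
classify_gen() {
  case "$1" in
    *"C6220 II"*) echo "v2 (C6220 II)" ;;
    *C6220*)      echo "v1 (C6220)" ;;
    *)            echo "unknown" ;;
  esac
}

product=$(dmidecode -s system-product-name 2>/dev/null || true)
serial=$(dmidecode -s system-serial-number 2>/dev/null || true)
echo "Model:       ${product:-unknown}"
echo "Service tag: ${serial:-unknown}"
echo "Generation:  $(classify_gen "$product")"
```

The serial number reported there is the Dell service tag, which you can look up on Dell's support site to confirm the shipped configuration.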
 
D

Deleted member 7760

Guest
I will soon be getting a C6220 and would like to install a mezzanine card, but the cards I am looking at do not come with the bridge board. I have a few questions and hope that somebody might be able to help:
1) Will mezzanine cards from the C6100 work? Has anybody here tried one?
2) Is the bridge board from the C6100 compatible with the C6220? In particular, is it compatible with PCI-e 3, i.e. will it work with a PCI-e 3 mezzanine card, such as the Infiniband card?

Thanks for any help!

Edit: I found that the bridge board for the C6220 has part number HH4P1. I am still wondering though if this is an equivalent part number for the one in the C6100 or if it needs to be different for PCI-e 3 support.

I believe the answer to my more general mezzanine cross-compatibility question is that most will work, but need different brackets and possibly different bridge boards. I am not certain about this and would welcome any comments, but I noted that, e.g., the X53DF 10GbE controller appears to be available for both systems under the same part number.

Another edit: I have now tested a C6100 bridge board (part number JKM5M) and it seems to work with a PCIe 3 ConnectX-3 InfiniBand mezzanine card on a C6220 II.
I also found this link demonstrating that the bridge board from a C6220 (part number HH4P1) works fine in a C6100: Modifying the Dell C6100 for 10GbE Mezz Cards
 
Last edited by a moderator:
D

Deleted member 7760

Guest
I now also have a question about backplanes: does anybody know if a C6100 direct backplane will work in a C6220 II? I want to take out the RAID card to use the slot for something else, but I have an expander backplane, and I don't think that will work with the motherboard SATA controller, so I am looking for a cheap replacement.
Thanks.
 

Jb boin

Member
May 26, 2016
49
16
8
36
Grenoble, France
www.phpnet.org
We have two C6220s (not II) with LSI 9265-8i controllers (one filled with 2x E5-2680 and 128 GB of RAM per node, the other with 2x E5-2670 and 96 GB of RAM per node) that we bought recently, and we are experiencing huge issues with SATA SSDs: disks drop out of the RAID (sometimes 3 or 4 disks at once, which can be catastrophic...).

It's easier to trigger these drops by rebooting or by putting heavy load on the RAID, but sometimes one or more disks drop at once even when the LSI WebBIOS shows no rebuilds or other activity.

It seems that some SSDs are less prone to being dropped than others, and some enclosure slots are also less prone to dropping disks than others:
  • We haven't had any issues so far (about 2 months) with 6x SanDisk Extreme Pro 960 GB on sled #1 of our first C6220
  • No issues so far (1 week) with 4x Samsung 850 Pro 480 GB on sled #4 of our first C6220 (but there were drops on sleds #2 and #3 of the same chassis...)
  • Tested Dell-branded Seagate Savvio 10K.6 300 GB SAS disks on the second chassis for more than a week without any issue so far
  • Tested 4x and 6x SanDisk Extreme Pro 480 GB on sleds #2, #3 and #4 of the first chassis, and drops occurred in all cases (even more on #2 and #3, where it couldn't even boot with all 6 disks detected at the same time)
  • Tested the SanDisk Extreme Pro 480 GB on the other C6220; we had issues as well
  • Tested 2x, 4x and 6x Samsung 850 Evo 250 GB on the second C6220 on every sled, and drives drop regularly


Here is what we have tried so far, without luck:
  • Both the 2.5.3 and 2.7.1 BIOS (the C6220 II BIOS is compatible with the C6220, even if this is not stated clearly on the website)
  • The LSI card initially had firmware from 2013; we upgraded to the latest, with no difference
  • Forcing the link speed to 3 Gbps instead of the default 6 Gbps
  • Swapping a sled whose enclosure slots seemed more prone to drops with one that was behaving better... the behavior didn't change, so the problem is probably midplane- or backplane-related
  • Verified that all cables were correctly plugged in, and unplugged and replugged them

The RAID card log contains many entries like this:
Event Description: Unexpected sense: PD 12(e0xfc/s1) Path 4433221102000000, CDB: 28 00 00 19 d6 00 00 02 00 00, Sense: b/47/03

Then, when a disk disconnects:
Event Description: PD 12(e0xfc/s1) Path 4433221102000000 reset (Type 03)
Event Description: Removed: PD 12(e0xfc/s1)
Event Description: Removed: PD 12(e0xfc/s1) Info: enclPd=fc, scsiType=0, portMap=02, sasAddr=4433221102000000,0000000000000000
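One quick way to see which enclosure slots misbehave most often is to tally the "Unexpected sense" events per slot from an exported controller event log. A sketch with standard tools; it assumes the log was dumped to a text file (e.g. with `MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL`), and `count_sense_by_slot` is just an illustrative helper name:

```shell
# Count "Unexpected sense" events per enclosure/slot (the "e0xfc/s1" token)
# so the worst-behaving slots stand out, most frequent first.
count_sense_by_slot() {  # reads the event log text on stdin
  grep 'Unexpected sense' \
    | grep -o 'e0x[0-9a-f]*/s[0-9]*' \
    | sort | uniq -c | sort -rn
}

# Usage on a saved log:  count_sense_by_slot < events.log
```

If the counts cluster on the same slots regardless of which SSD sits in them, that points at the midplane/backplane rather than the drives.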


I found an old topic with similar issues with the 840 Pro, but if I understood correctly it was solved with firmware upgrades: Should be sticky: Samsung 840 and 840 pro are not LSI megaraid compatible


Has anyone had issues like this?

Buying Dell SSDs is not a viable solution as their price is way too high. The seller of the servers tells us he can sell us these SSD models, which should be compatible while not too expensive: "Intel 335 SSDSC2CT240A4" and "ADATA XPG SX900 ASX900S7-256G".

Do you think that is a viable solution?
 

Renat

Member
Jun 8, 2016
57
19
8
41
Can someone confirm that all "Power Distribution Board HDD Backplane" cables for the C6100 and C6220 generations are the same and have the same length:
C6100 to 3.5 - 4HVFG or FCJ56
C6100 to 2.5 - 3YH92
C6220 to 3.5 - 7YR8G
C6220 to 2.5 - VG8JT
 
Last edited:

frogtech

Well-Known Member
Jan 4, 2016
1,482
272
83
35
Anyone have any luck with these with an HBA and SATA6G SSD's?
What "luck" are you hoping to find? C6220 does not support 6G speeds unless you come up with a clever way of running really long break out cables from the rear of the node directly to the backplane. C6220 II is a different story, it has native 6G support. This is due to how Dell designed the nodes.
 

c6100

Member
Oct 22, 2013
163
1
18
USA
I am looking at a C6220 which does not have the cutouts for the 10 Gbps card in the back. Does anyone know if the metal slot cover can be removed and replaced with one that is already properly slotted, taken from a C6100?
 

c6100

Member
Oct 22, 2013
163
1
18
USA
Thanks for posting back. I learned earlier today after skimming even more owner's manuals that the 6220 II motherboard is more like the 6100: it has ports near the front edge where the node plugs into the midplanes, similar to how the 6100 has an interposer extender. If you use an HBA then you use the ports near the front edge rather than the onboard plugs near the rear I/O. This does enable 6.0 Gbps links. They really dropped the ball on 6220 gen I.
Any recommendation on which cable is best for 3x SATA to SAS for an HBA?
 

c6100

Member
Oct 22, 2013
163
1
18
USA
Hi guys.

Can anyone who successfully installed a TCK99 in their C6220 enlighten me as to the model of the bridge card? I tried using the C6100 one, but it seems a little too long, so it raises the height of each node to a level where it bumps into the chassis or the node above. Such information is extremely elusive online.

Cheers,
Josh

Josh - Did you ever figure out the proper bridge card? I am going to be in the same boat. I just assumed it would fit.
 

josh

Active Member
Oct 21, 2013
615
190
43
Josh - Did you ever figure out the proper bridge card? I am going to be in the same boat. I just assumed it would fit.
No. I ended up sending the full mesh back to the seller to swap for 4 pieces of the ones with holes. It's one whole piece. The bigger problem was that the cards didn't line up even with the new mesh; they kind of stick upward. They work fine for the top 2 nodes but will have some problems sliding into the bottom 2.
 
  • Like
Reactions: c6100

c6100

Member
Oct 22, 2013
163
1
18
USA
No. I ended up sending the full mesh back to the seller to swap for 4 pieces of the ones with holes. It's one whole piece. The bigger problem was that the cards didn't line up even with the new mesh; they kind of stick upward. They work fine for the top 2 nodes but will have some problems sliding into the bottom 2.
To clarify, the mesh with holes from the c6100 would not work with the c6220?

Sent from my Nexus 6P using Tapatalk