The Dell PowerEdge C6220 Thread


c6100

Member
Oct 22, 2013
USA
Absolutely. I'm running ESX 6.0 and getting near wirespeed. BIOS update?

-D
Question for you: when you run "ethtool -i vmnic2" from the CLI of one of your hosts, what do you see? I don't see a firmware version. Do you?

driver: ixgbe
version: 4.4.1-iov
firmware-version: 0x00000000
bus-info: 0000:82:00.0
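A quick way to compare that field across several hosts is to filter the driver-info output down to just the firmware line. A minimal sketch, using the sample output above as a stand-in (on a live host you would pipe `ethtool -i vmnic2` straight into awk):

```shell
# Sample `ethtool -i` output as a here-string stand-in for a live query.
info='driver: ixgbe
version: 4.4.1-iov
firmware-version: 0x00000000
bus-info: 0000:82:00.0'

# Pull just the firmware-version value; split on ": ".
fw=$(printf '%s\n' "$info" | awk -F': ' '/^firmware-version:/ {print $2}')
echo "$fw"   # 0x00000000 here suggests the driver could not read a version
```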
 

diobgh1

New Member
Jan 24, 2017
Is the C6220 a genuine Dell device?
I ask because I have a couple of C1100 which are (apparently) Quanta units, and having just had one die on me in a fail condition that isn't specified in the manual (Fault light solid on), I'm now a little nervous about these "low price no frills" servers from Dell, who I would normally trust very highly for reliability.

Another weirdness with my C1100 is I can't seem to properly fit a PCIE card, the riser is just too far away from the bracket, so to fit a card I had to remove the bracket - highly unusual.

When you buy a VW, you know you're not getting an Audi, but you know you benefit from the engineering expertise/resources that go into the Audi because it's the same company. An old banger with a VW logo glued on it is a whole different proposition.

I should point out I buy all my hardware 2nd hand out of service contracts, etc.
 

dwright1542

Active Member
Dec 26, 2015
Is the C6220 a genuine Dell device?
I ask because I have a couple of C1100 which are (apparently) Quanta units, and having just had one die on me in a fail condition that isn't specified in the manual (Fault light solid on), I'm now a little nervous about these "low price no frills" servers from Dell, who I would normally trust very highly for reliability.

Another weirdness with my C1100 is I can't seem to properly fit a PCIE card, the riser is just too far away from the bracket, so to fit a card I had to remove the bracket - highly unusual.

When you buy a VW, you know you're not getting an Audi, but you know you benefit from the engineering expertise/resources that go into the Audi because it's the same company. An old banger with a VW logo glued on it is a whole different proposition.

I should point out I buy all my hardware 2nd hand out of service contracts, etc.
The C series is indeed Dell, as long as you can find service tags, but it's made by Quanta. It's a small department, but it's North American, and those guys know their stuff. No-hassle parts exchanges, etc. I've also got 1100s and 2100s running ESX all over the place, although their 5-year warranties are coming due. Some of the best machines I've bought for the price.
 

GabrielHm

New Member
Feb 14, 2017
The SAS port on the motherboard is weird; it's more like a passthrough. If you look at a C6220 node, it has no interposer, unlike the C6100 node, which connects to a small daughterboard with SATA ports on it. The node connector 'passes through' the connections. This is inherently one of the big flaws with the system and why you're limited to only two SATA III ports and four SATA II ports.
Indeed, thanks for telling me. That spared me a few hours of running tests and so on.

I just got the fancy 60 bay JBOD and LSI 2008 mezzanine card, but still missing the SAS cables.

I'm still unclear whether what I'm trying to do even makes sense; I'm trying to plug the mezzanine card into the PCIe Gen3 x8 mezzanine slot (not the internal SAS mezzanine one, and I don't intend to connect it to the backplane through the motherboard) in order to attach the JBOD to it. I'd like to use the 60 bay JBOD as shared storage, while keeping the C6220 internal SATA drives connected and running.

I guess it won't work, as those mezzanine cards are controllers, not HBAs. Is it as stupid as it sounds?
 

frogtech

Well-Known Member
Jan 4, 2016
Indeed, thanks for telling me. That spared me a few hours of running tests and so on.

I just got the fancy 60 bay JBOD and LSI 2008 mezzanine card, but still missing the SAS cables.

I'm still unclear whether what I'm trying to do even makes sense; I'm trying to plug the mezzanine card into the PCIe Gen3 x8 mezzanine slot (not the internal SAS mezzanine one, and I don't intend to connect it to the backplane through the motherboard) in order to attach the JBOD to it. I'd like to use the 60 bay JBOD as shared storage, while keeping the C6220 internal SATA drives connected and running.

I guess it won't work, as those mezzanine cards are controllers, not HBAs. Is it as stupid as it sounds?
You mean an external SAS HBA into the standard low profile PCI-e slot? You can still do this.
 

GabrielHm

New Member
Feb 14, 2017
You mean an external SAS HBA into the standard low profile PCI-e slot? You can still do this.
My standard low-profile x16 PCIe slot is already used; all my nodes are connected to a C410X through a HIC card:


So I was planning to use the LSI 2008 mezzanine card instead.

Finally, it seems to be working: the drives in the JBOD are detected during boot, and the C6220 internal drives as well.
I haven't gotten past the Linux boot screen yet, though: the node freezes after about a minute. Overheating, I guess.
 

vrod

Active Member
Jan 18, 2015
Been reading through this discussion for the last 30 minutes. There are still some things I'm hoping people here know, as I'm considering getting a box like this myself.

- Do both Gen I and Gen II of the C6220 support PCIe 3.0? I'd like to supply each of mine with a small but fast NVMe SSD.
- Is there an integrated SATA controller? I saw some folks mention the C602 PCH, so I would suppose so. I'm just planning to have 2 nodes with 6 WD Reds each and the other 2 with just an NVMe.
- Are these things stable? Looks like there have been a few bug reports in this topic, some severe, with disks dropping out of RAIDs. Or did Dell/Quanta make a stable BIOS by now?
- I see 2 PCIe slots on the board; does that mean you can fit a mezzanine SFP+ card in addition to a PCIe device?

Cheers!
 

frogtech

Well-Known Member
Jan 4, 2016
Been reading through this discussion for the last 30 minutes. There are still some things I'm hoping people here know, as I'm considering getting a box like this myself.

- Do both Gen I and Gen II of the C6220 support PCIe 3.0? I'd like to supply each of mine with a small but fast NVMe SSD.
- Is there an integrated SATA controller? I saw some folks mention the C602 PCH, so I would suppose so. I'm just planning to have 2 nodes with 6 WD Reds each and the other 2 with just an NVMe.
- Are these things stable? Looks like there have been a few bug reports in this topic, some severe, with disks dropping out of RAIDs. Or did Dell/Quanta make a stable BIOS by now?
- I see 2 PCIe slots on the board; does that mean you can fit a mezzanine SFP+ card in addition to a PCIe device?

Cheers!
Not sure why they have the 2 PCIe slots, but to my knowledge it's the same as the C6100: one PCIe slot and one mezzanine expansion card.

I'm not sure your plan will work. The C6220 doesn't have dynamic drive allocation; the interface the node plugs into is a PCIe-style connector that carries the data to a board in front of the power distribution area, which has all the SAS connectors. So you're limited in how you can allocate drives. Look up the hardware guide for more info.

Just get a C6100; the system design makes way more sense, lol, and it's functionally better.

Edit: to further clarify, the C6220 uses 2 midplanes with 4 SAS ports each. That's 8 drives per node, but the ports are assigned statically to each node. You can't reconfigure unless you have a C6220 II. Even if you get that chassis and use Gen 1 nodes, I'm not sure it will work.

Sent from my SM-N930F using Tapatalk
 

Bogdan Daja

New Member
Sep 30, 2016
I need some info about the part number of the cable from the midplane to the SAS backplane of the C6220 with 4 nodes.
I bought a server with 4 nodes, but only 3 nodes see drives; one (node 3) doesn't see any disks because its cable is missing (my guess is it was a 3-node server and a fourth node was added before delivery to me, without any checks).
For my planned setup I have to replace all 3 cables and buy 1 more.
Can anyone help me with the part number, or something I can search for to order them?
Thank you,
Bogdan
 

Bogdan Daja

New Member
Sep 30, 2016
Hi,
First of all, without any SATA controller on board you won't get more than 3G SATA (SAS drives won't work).
Second, you need a SAS controller on PCIe or mezzanine.
My setup uses 4 nodes, each with the mezzanine filled with 2x10G and the PCIe slot with a 6G SAS controller, used with 1 SSD and 2 SAS drives, all 6G.
By the way, I use the 12x3.5" configuration, so no need to worry about disks dropping from RAID (that issue is with the 24x2.5" one).
I hope the info helps.
Been reading through this discussion for the last 30 minutes. There are still some things I'm hoping people here know, as I'm considering getting a box like this myself.

- Do both Gen I and Gen II of the C6220 support PCIe 3.0? I'd like to supply each of mine with a small but fast NVMe SSD.
- Is there an integrated SATA controller? I saw some folks mention the C602 PCH, so I would suppose so. I'm just planning to have 2 nodes with 6 WD Reds each and the other 2 with just an NVMe.
- Are these things stable? Looks like there have been a few bug reports in this topic, some severe, with disks dropping out of RAIDs. Or did Dell/Quanta make a stable BIOS by now?
- I see 2 PCIe slots on the board; does that mean you can fit a mezzanine SFP+ card in addition to a PCIe device?

Cheers!
 

vrod

Active Member
Jan 18, 2015
Hi,
First of all, without any SATA controller on board you won't get more than 3G SATA (SAS drives won't work).
Second, you need a SAS controller on PCIe or mezzanine.
My setup uses 4 nodes, each with the mezzanine filled with 2x10G and the PCIe slot with a 6G SAS controller, used with 1 SSD and 2 SAS drives, all 6G.
By the way, I use the 12x3.5" configuration, so no need to worry about disks dropping from RAID (that issue is with the 24x2.5" one).
I hope the info helps.
I guess I just gotta split my Reds up then. :p Won't matter too much, but since I'll only be getting 5U of rack space, I need to find a home for these drives somehow, since they currently live in a normal 2U chassis. :)
 

diobgh1

New Member
Jan 24, 2017
Thought I'd update on my research. In contrast to the C1100/C2100, the C6220 motherboard is actually Dell branded, with a Dell logo on the board, so I suspect this is closer to a proper Dell product than the C1100/C2100.

I'd also be interested to know what cables are required for adding a node separately. It seems that 3-node servers are a much better price per node than 4-node servers. A vague plan is to buy a 3-node server plus an extra node and put them together, following Patrick's plan of fitting a low-power CPU in the 4th node.
Which cables are required for fitting a node? I don't care about SAS; I'm happy with SATA 3G, etc.
 

nbritton

New Member
Nov 19, 2016
I am considering a C6220. I read the manual someone linked earlier (http://site.pc-wholesale.com/manuals/Dell-PowerEdge-C6220.pdf).

It seems like the SATA connectors are NOT SATA III (SATA II instead), and the PCIe is 2.0 only?
The Radon mainboard is PCIe 3.0 and has a Patsburg-C1 PCH, so it has four SATA II ports (MiniSAS0 connector) and two SATA III ports (SATA4 & SATA5 connectors). The mezzanine board slot is PCIe x8, the PCIe riser slots are x16, and the PCIe edge connector in the rear is x16.
 

c6100

Member
Oct 22, 2013
USA
Absolutely. I'm running ESX 6.0 and getting near wirespeed. BIOS update?

-D
I tried using the Dell live CD on two of the hosts and did receive line speed. It appears the issue is at the hypervisor level. Do you have SR-IOV enabled in the BIOS?

Sent from my Nexus 6P using Tapatalk
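For anyone following along, SR-IOV on ESXi also needs virtual functions enabled at the driver level, not just in the BIOS. A minimal sketch of the commonly documented esxcli procedure for the ixgbe module; the VF count of 8 per port is just an example, so verify the parameter name and limits against your installed driver's documentation:

```shell
# Sketch: enable SR-IOV virtual functions for the ixgbe driver on an ESXi host.
# max_vfs takes one value per ixgbe port (here two ports, 8 VFs each).
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

# Confirm the setting took, then reboot the host for it to apply.
esxcli system module parameters list -m ixgbe | grep max_vfs
```

After the reboot, the VFs should show up as passthrough devices in the host's hardware list.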
 

frogtech

Well-Known Member
Jan 4, 2016
No. The gen 1 chassis isn't designed for that.

Sent from my LG-H811 using Tapatalk