Dell PowerEdge C8220 Build and Questions


falanger

Member
Jan 30, 2018
Question: I see on your monitor screen that your BIOS shows the option ["edge slot" <enabled>] under "PCI configuration". Was this "edge slot"- option visible already before you connected a GPU card, or did this option only become visible in the BIOS after you connected the GPU card to the edge slot?
The Edge slot option becomes visible after connecting an additional +12 V power supply and a card to the GPGPU riser. Without a card installed, the slot is not detected. Also, the system only sees two x16 GPU cards; it does not see a third, although it does see the x4 SSD card. The board lacks the signal lines to run three x16 cards.
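The lane-budget behaviour described above can be sketched as a simple allocation problem: link requests are granted against a fixed pool of routed lanes, so two x16 GPUs and one x4 card fit but a third x16 link does not. The routed-lane total of 36 is an illustrative assumption, not a figure from Dell documentation.

```python
# Sketch of the behaviour falanger describes: two x16 GPUs plus an x4
# card come up, but a third x16 device gets no link.
# ASSUMPTION: 36 lanes routed to the risers (16 + 16 + 4); illustrative only.

def allocate_links(routed_lanes, requests):
    """Greedily grant PCIe link widths; return (name, width, link_up) tuples."""
    results = []
    free = routed_lanes
    for name, width in requests:
        granted = width <= free
        if granted:
            free -= width
        results.append((name, width, granted))
    return results

requests = [
    ("GPU 1", 16),
    ("GPU 2", 16),
    ("x4 SSD card", 4),
    ("GPU 3", 16),  # no signal lines left for a third x16 link
]

for name, width, up in allocate_links(36, requests):
    print(f"{name:12s} x{width:<2d} -> {'link up' if up else 'not detected'}")
```

Swapping the order of the requests shows the limit is the lane pool, not the slot: whichever x16 card is plugged in last fails to train.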
 

jverschoor

New Member
Mar 12, 2021
The Edge slot option becomes visible after connecting an additional +12 V power supply and a card to the GPGPU riser. Without a card installed, the slot is not detected. Also, the system only sees two x16 GPU cards; it does not see a third, although it does see the x4 SSD card. The board lacks the signal lines to run three x16 cards.
Thanks for your reply, Falanger; very helpful.

I have one follow-up question, if you don't mind. You say: "The Edge slot option is visible after connecting an additional +12 V power supply...".

Question: where did you connect this additional 12V power supply to?

Is it connector 23, the "power connector interposer"? (As highlighted in yellow in the attached picture. I see in your videos that you have indeed connected a power cable to this 4-pin connector, but I don't know what this connector powers.)

See, we have 4 C6220ii nodes in a C6000 chassis, and the 4 nodes get their power from the so-called "middle plane" at the rear. (So we use neither power connector 1 (the main one) nor power connector 2 (the small 4-pin one).)

This leaves me wondering whether we need to hook up an additional power supply to power connector 2 in order for the K80 to show up when connected to the edge slot (the rear interposer PCIe slot). Maybe that 4-pin power connector 2 supplies the 75 W to that PCIe slot (the 225 W to the 8-pin power connector on the K80 we will supply separately).
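The power split JJ is reasoning about is straightforward arithmetic: the K80's published 300 W board power has to come from the slot (capped at 75 W for a standard x16 slot) plus the auxiliary connector. A minimal sketch of that budget, using the K80 and PCIe CEM figures rather than anything Dell-specific:

```python
# Power-budget arithmetic behind the question above.
# Figures: 300 W NVIDIA K80 board power, 75 W PCIe x16 slot limit (CEM spec).
# This is a sketch of the reasoning, not measured data from a C6220ii.

K80_TDP_W = 300
SLOT_POWER_W = 75                      # max a standard x16 slot provides
AUX_POWER_W = K80_TDP_W - SLOT_POWER_W # remainder via the 8-pin aux lead

print(f"slot supplies up to {SLOT_POWER_W} W; "
      f"aux connector must cover {AUX_POWER_W} W")
```

So if the 4-pin connector 2 really is what feeds the edge slot's 75 W, the card cannot reach full power without it, even with the 225 W aux lead connected.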

I would appreciate it if you could share your thinking.

Kind regards, JJ
 

Attachments

falanger

Member
Jan 30, 2018
On GPU risers from Dell servers like yours, boards with an adjacent x16 slot have a 4-pin +12 V power connector that feeds the card directly.
The 4-pin connector on the GPU riser board supplies +12 V directly to that x16 slot, so the card does not load, and possibly damage, the motherboard's power lines.
The small 4-pin connector near the memory slots supplies +12 V to the motherboard itself, to keep it stable when powerful processors and many memory modules (more than half of the total slots) are installed.
The pinout of these power connectors is posted on the forum; note that it differs between the motherboard and the risers. The connectors themselves are sold on eBay.
 

falanger

Member
Jan 30, 2018
I have run into a problem: my Dell C6220 II board cannot bring up an RTX 3060 12 GB, while older cards work. Has anyone faced and solved this problem?
 

falanger

Member
Jan 30, 2018
Through experimentation I found that the GT 710, 730, 1030, GTX 1080, and HD 6450 work on my C6220 board, but the GTX 1650 and RTX 3060 do not.
 

jverschoor

New Member
Mar 12, 2021
Which PCIe slot did you try the RTX 3060 in: one of the two front PCIe slots, or the rear edge slot via the GPGPU riser?

Actually, either way, I won't know the answer. But the board was designed to work with the older NVIDIA Tesla/Intel Phi cards, so maybe only the older Fermi/Kepler architectures are supported, while the newer Pascal and Ampere architectures (used by the GTX and RTX series respectively) are not? (I don't know about Radeon.)
 

falanger

Member
Jan 30, 2018
Empirically, by testing video cards at the store, I found that the board works with the GT 710, 730, 1030 and GTX 1060, 1070, but not with the GTX 1650, RTX 2060, or 3060. Keep this in mind when assembling your workstation.
 

jverschoor

New Member
Mar 12, 2021
Ok, thanks, I will keep that in mind. In that regard, did you ever actually manage to run a K80 on the rear edge slot GPGPU riser? You told me about the overall PCIe lane limitation (which prevents running 3 GPUs), but did you ever get a single K80 running on the rear edge slot GPGPU riser?

Kind regards, JJ
 

yourepicfailure

New Member
Jul 23, 2019
My Dell has no problem with a Gigabyte RTX2080 Super:
[attached screenshot: rtxworks.png]
I am actually using it right now to type this message. It is currently the primary display adapter.
[attached screenshot: works.png]

EDIT: In fact, at one point I had both the 2080 Super AND an Asus Rog 2080TI 11G installed.
 

jverschoor

New Member
Mar 12, 2021
Hi, ok, that is promising, but did you install those GPUs in the front two PCIe ports, or did you use the powered riser on the rear edge slot for one of them?
 

jverschoor

New Member
Mar 12, 2021
Ok, thanks for letting me know.
This weekend I will try to connect a K80 to the rear GPGPU slot via the riser ribbon cable, and I will post back on whether that works. I believe it didn't work for Falanger, but Drabadue managed to get a GTX 780 to function on the rear GPGPU slot, and the K80 is nearly the same architecture as the GTX 780, so I am hoping it will work.

Kind regards, JJ
 

jverschoor

New Member
Mar 12, 2021
Just in case anybody has networking issues with the onboard NICs: note that Dell has finally (29 March 2021) updated their firmware to version 20.0.16.

You can download it here: https://dl.dell.com/FOLDER07032145M/3/Network_Firmware_23WP1_WN64_20.0.16_A00.EXE
release notes here: https://dl.dell.com/FOLDER07032138M/1/fw_release.txt

Note: the executable works on Windows only, but it's just a self-extracting archive of repackaged Intel firmware, which you can also extract manually with a command like "Network_Firmware_23WP1_WN64_20.0.16_A00.EXE /s /e=C:\MakeATempDirectory". You can then run the embedded Intel firmware updater from that directory with "fitw64e.exe -u -l -c fit.cfg" in an elevated command prompt. You may need to edit fit.cfg and package.xml to match your NICs, but do so carefully: we bricked a motherboard by trying to crossflash an "i350 LOM" (for which there was no firmware update) to an "i350-t LOM" (for which there was).
 

jverschoor

New Member
Mar 12, 2021
Hi,

As an update: we tried to install the K80s this weekend, to no avail. The PCIe edge slot isn't recognized in the BIOS, even though the GPU is connected with auxiliary power and the riser is powered as well.

I think this is because our BIOS (version 2.10) is for the C6220ii (which doesn't normally support GPUs) rather than the C8220 (which does). However, as that seemed the obvious explanation, we had asked Dell explicitly beforehand whether they had disabled that PCIe slot in the C6220ii BIOS, and they claimed they hadn't.
And when I saw in Falanger's videos that his BIOS screen indicated a C6220 model (so it looked like he had flashed a C6220ii BIOS) and that the edge slot did pop up in his BIOS, I believed that Dell indeed hadn't disabled the PCIe slot, so we went ahead with the modifications. However, it isn't working: the edge slot won't pop up in the BIOS.

Now, our motherboard is 09N44V, while Falanger's is 083N0, but I don't think that makes any difference. I think it is just BIOS related, but we can't cross-flash to the C8220 BIOS, as we would likely lose the interaction with the C6000 chassis.
 

jabuzzard

Member
Mar 22, 2021
I think this is because our BIOS (version 2.10) is for the C6220ii (which doesn't normally support GPUs) rather than the C8220 (which does). However, as that seemed the obvious explanation, we had asked Dell explicitly whether they had disabled that PCIe slot in the C6220ii BIOS, and they claimed they hadn't.
That's wrong: the C6100, C6220 and C6220II do support GPUs. The standard way this was supposed to work is that you fit an iPass card in the server and house your GPU in a C410x PCIe expansion chassis. They are, however, somewhat finicky, and if a GPU gets into a stuck state you usually have to power-cycle the whole C410x to get it working again.

The C6100, however, won't run a modern GPU, i.e. no P100 or V100. There doesn't seem to be enough BAR space and no way to increase it in the BIOS. On the other hand, the C6220 will run everything right up to a V100, though you will need to make your own power cables. That's what happens when an academic procures a bunch of GPU cards and asks if we can host them for them. I'm hoping to retire them shortly and return the nodes to the undergraduate HPC cluster.
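The BAR-space point above can be checked on Linux with `lspci -vv -s <gpu-address>`: modern datacenter GPUs expose multi-gigabyte 64-bit prefetchable BARs that an old BIOS cannot map. A small sketch of parsing the `[size=...]` field from that output; the sample line is illustrative, not captured from a C6100.

```python
# Parse the "[size=16G]" suffix from an lspci -vv "Memory at ..." line.
# Sample line is hypothetical example output, not from a real C6100.
import re

def bar_size_bytes(lspci_line):
    """Return the BAR size in bytes from an lspci memory line, or None."""
    m = re.search(r"\[size=(\d+)([KMG]?)\]", lspci_line)
    if not m:
        return None
    mult = {"": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}[m.group(2)]
    return int(m.group(1)) * mult

sample = "Memory at 383fe0000000 (64-bit, prefetchable) [size=16G]"
print(bar_size_bytes(sample))  # a 16 GiB BAR: 17179869184 bytes
```

A BIOS that only maps BARs below 4 GiB (no "Above 4G decoding" option) simply cannot place a BAR that large, which matches the "not enough BAR space" symptom.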

I have not tried it with an A100, as we ended up getting a really good deal on some SR670s from Lenovo with A100s in them (basically the SR670 was free). I can confirm an A100 runs just fine in a PowerEdge R730, but expect to do some metalwork, drilling new holes for the support brackets, and again making your own power cables.
 

jverschoor

New Member
Mar 12, 2021
Hi Jabuzzard,

Indeed, the C6220ii works with an HIC card to a C410x, but we are trying to connect a GPU directly to the edge PCIe slot, which is normally only used in a C8220, not in a C6220ii. That's where the BIOS comment comes from.

Kind regards, JJ
 

jabuzzard

Member
Mar 22, 2021
The HIC/iPass card makes no difference as to whether the card will work from a BIOS perspective; it's just like a very long PCIe riser. Admittedly the C410x has a PCIe switch chip inside, but the cards also work just fine with an older NVIDIA expansion unit that does not have the PCIe switch chip.

The most likely issue is powering the card. Unless you have made your own lead to power the GPU, it is unlikely to work with anything modern. NVIDIA keeps changing the pinout on those connectors, so a stock GPU aux power lead made for something like a Tesla M2075 just won't work on a K80, and certainly does not work on a P100 onwards.

You very likely need to spend time studying the relevant board specification documents from NVIDIA and making up your own leads. I have an interesting collection of them for different cards in different Dell systems. If you don't have the proper crimp tool, you can use fine needle-nose pliers followed by a small amount of solder, as crimping with pliers alone does not make a very good connection.
 

jverschoor

New Member
Mar 12, 2021
Hi Jabuzzard,

Thanks for your reply. Indeed, BIOS-wise the GPUs are supported; most in this thread managed to get multiple GPUs to function in the first two PCIe slots, and some also managed to get GPUs to function on the rear PCIe edge slot via a riser connector.

The problem we are having is that we cannot get the rear PCIe edge slot to become visible in the BIOS, and we suspect the reason is that Dell deliberately disabled this edge slot in BIOS version 2.10 for the C6220ii, as it normally isn't possible to connect the riser connector to the edge slot in the C6000 chassis, while it is possible in the C8000 chassis. (As an analogy: in the 12-bay version of the R510, Dell disabled the SATA ports at BIOS level because they didn't think anyone could connect a SATA drive to them (due to space considerations); then along came SATA DOMs, which did allow use of those SATA ports, yet Dell refused to re-enable them in the BIOS.)

For this reason we specifically asked Dell beforehand whether they had disabled that edge PCIe slot at BIOS level, and Dell claimed they hadn't. Falanger's videos also show the edge port popping up in his BIOS, while Falanger also appears to have flashed a C6220ii BIOS version.

So we don't really understand what our issue is, but the chance that we made a mistake with the power connectors is very small, as it just isn't that difficult: we powered the riser as it should be (2 x +12 V and 2 x ground) and powered the K80 using an official NVIDIA K80 dongle (which converts "6-pin PCIe + 8-pin PCIe" to 8-pin CPU power), and our power budget is ample (1200 W for 2 K80s plus fans). It can't possibly be underpowered, and the green indicator light on the K80s does switch on. So we have no idea why the edge slot just won't pop up in our BIOS.

I will be at the data center this weekend and will take some photographs, but any ideas you have are much appreciated.

Kind regards, JJ
 

bellamou

New Member
Jul 4, 2024
Did anyone ever figure out the onboard SATA for these? Is there some way to have them work without having to plug in a backplane (or other additional cards), or is it not possible?
Have you succeeded in making these onboard SATA ports work without additional cards or plugs, i.e. with a direct connection to an HDD? Thanks.
 

bellamou

New Member
Jul 4, 2024
Hi, this thread may be out of date, but I have a couple of questions regarding DIY C6220 V1 nodes.

1/ On the C6220 V1 there are only 2 onboard SATA ports at the rear. Can these ports be used without any extra expansion card, cable routing, or special BIOS setup? HDDs are not detected on these ports when trying to install an OS from a bootable USB.

2/ The system board has two PCIe Gen2 x16 slots (non-mezzanine); can both be used at the same time?

3/ If adding a graphics card, is there a special slot to use? Is special power to the system board needed, given that only the mini 18-pin connector is powered?

Thanks
Regards