Taming the C6100

trousers

New Member
Nov 6, 2015
1
0
1
112
I run 2 nodes and changed out the fans. I had some extra aluminum left over from the fascia on my house, so I bent it to fit in the upper node bays and force the air through the heatsinks.



[Attached photos: WP_20151015_006.jpg, WP_20151015_007.jpg]
 

spyrule

Active Member
I always thought I would get a unit with just two sleds and put a 4U Supermicro HSF on the blades, though that wouldn't fix the Delta fans blowing like howling witches.
The problem with that is the other chips with heatsinks. They rely on the passive airflow for cooling as well, so you'd have to get a set of 30/40mm fans to cool those, and those tend to be loud in the high-pitch range.
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
Hi Y'all :)

Wish I'd seen this thread before. I recently bought a C6100 XS23-TY3 with 4x quad-core E5530 off fleabay for my home lab at a good price. It's only been a few days and I'm about to give up. I can hear this thing in my kitchen on the first floor; my 5 GHz WiFi has trouble even reaching there. To make matters worse, the FCB is a PIC16 with 1.04 FW.

I tried the KCSFLASH utility since I can't find version 0.4 of FCBFLASH. It reported success, but the PIC FW still shows 0104 in the BIOS. I was brave enough to flash the BIOS to 1.71 and the BMC to 1.33 even though the service tag was not on Dell's support site; both worked fine.

./kcsflash FCB [-p DataPort] [-q CommandPort] <filename>

E.g.: ./kcsflash FCB file.bin (uses the default port numbers 0xCA2, 0xCA3)
./kcsflash FCB -p 0xCA8 -q 0xCAC file.bin (uses port numbers 0xCA8, 0xCAC)

I only used the default option as I didn't want to brick the FCB.
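
For what it's worth, here is a minimal in-band sketch of how one could confirm what the node itself reports after the BIOS/BMC flashes, assuming ipmitool (with the OpenIPMI driver loaded) and dmidecode are available on the host OS. It only shows the BMC and BIOS revisions, not the FCB's PIC version, which the flash tools and BIOS seem to track separately:

# Sketch: verify BMC and BIOS revisions from the node's OS (assumes ipmitool + dmidecode)
ipmitool mc info | grep 'Firmware Revision'   # BMC firmware, should read 1.33 after the flash
dmidecode -s bios-version                     # system BIOS, should read 1.71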

My request:
Wondering if anyone has had any luck updating the PIC16 FCB and controlling the fan speeds. I only have two nodes now and will likely not add additional nodes.
Does anyone have a spare PIC18 FCB for sale?

Thanks
 

MRose34

New Member
Jun 29, 2015
15
0
1
51
Truthfully, you'd probably be better off just swapping the fans. I have mine sitting at work under my desk, less than 2 feet from me, and I can talk on the phone and have conversations with people standing near and far; the server is very quiet. It's hardly noticeable and it does not overheat. As long as you have decent temps from the A/C it will be fine. This thread is really old and nobody has reported being able to control the fans yet, so I would not hold out hope. Just swap the fans and be done with it.
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
Or flash the PIC18 and let us know what happened... :p
I tried. :)
FCBflash refused since it's a PIC16 board and asked me to use the older version (0.4) of the tool. I scoured the internet but couldn't find anything older than 0.7.

KCSflash is supposed to handle PIC16. It says it finished the FW flash, but the BIOS and the BMC tool still show FW 0104. I'll give it a whirl with the other parameters, since it's only been less than a week. :)
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
I have no idea if any of these are the correct version, but it's all of the copies I have:
Dropbox - kcsflash.zip
Thanks for the help.
I'm actually looking for an older version of FCBFlash. I have the one from the 1.15 package and it is v0.7; its built-in usage text is below, followed by a couple of example invocations.

"==========User Guild=========="
"This flash tool used for upgrade FCB FW ( Pic18)"
"Note that: The pic18 FW must have header."
"If you want to flash pre-version FW without header, please use pre-version tool such as V0.4."
"<-t> Flash Fan Speed Control table.");
"<-o> Use OEM command. Defaut: Master Write_r command");
"The following is an example, FYI..........");
"FCBflash <bin-filename> means all.bin");
"FCBflash <-o> <bin-filename> means use OEM command to flash all.bin");
"FCBflash <-t> <bin-filename> means fsc.bin");
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
Not sure if this is the correct thread to ask this question. I can't seem to slide the nodes out of my C6100. Per my understanding, all I need to do is unscrew the latch, press and hold the latch, and slide out the node. I can do that with the empty filler but not with the nodes. Do I need to unscrew anything inside?
 

Dale McKay

New Member
Feb 17, 2016
4
1
3
65
My C6100 experience.

I wanted to upgrade my home lab. I am a consultant and run a variety of systems in a pretty extensive home lab: an MS 2012 R2 domain, an ESXi cluster, Hyper-V, Cisco, and iSCSI storage via Synology RS812 platforms. I was using Dell 1950 servers for a lot of this, so I now have three of them for sale... just sayin'.

I recently purchased a DCS (no valid Dell service tag at Dell support) C6100 from a seller on eBay. The final agreed price was $700, which included 4 500GB drives as well. The C6100 (3.5" format) has 4 nodes, each with dual L5520s (only 60W each) and 48 GB of RAM, plus dual power supplies. I pushed the seller to update the BIOS and BMC to the latest versions before shipping (BIOS at 1.71, BMC at 1.33).

I added network cards to the three nodes that I intended to use as ESX nodes (Dell NetXtreme II Dual Port Gigabit PCIe Network Interface Card, G218C). When I installed these, the top of the metal bracket extended too far into the airflow for my liking, so I used tin snips to cut the metal bracket right at the top of the PCB. The network cards don't actually attach to anything other than the PCI riser card and its metal bracket, but they are surprisingly steady and quite small, so there were no issues with the lack of a bracket. (Total cost for three network cards: $30.) ESX had no issue with these cards, and I run both a VDS and a standard switch in each ESX host, connected via LAGs to my core 1Gb switch.

I then ordered four San Ace 80 fans (9G0812P1F041, 80x80x38mm, 5-pin, DC 12V, 0.58A). I found out that my FCB was a PIC-16 board when I tried to update the FCB firmware, so no FCB update was done. My original fans were quite loud, as reported by numerous other users; however, they did quiet down after initial configuration of all four nodes' IPMI and the nodes themselves. At one point I reseated a node and was greeted with a rapid reduction in fan speed (filed under I DUNNO). My original fans ALMOST got as quiet as the fans on my existing Dell 2950. I received the new fans and did the fan mod using the documentation found on this site. One suggestion I would make is to cut the existing fan wiring harness as close to the original fans as you are comfortable with; this avoids having to re-run the wiring harnesses for the fans. Just keep in mind that you have to have enough wire left to do the stripping and soldering. I chose to use heat shrink over each of my connections and everything fit very nicely after the mod was completed. (Sorry, no pictures taken.)

One additional note on cooling: my C6100 came with disk tray blanks in nine of the twelve slots. These blanks actually had a big piece of plastic, roughly the size of a 3.5" HDD, installed in them. I removed the plastic and then re-inserted the blanks to allow for more unrestricted airflow, since I would not be using HDDs in any of the ESXi hosts' drive slots. My CPU temps hover around 45 to 50C across all four nodes. My fans continually run at 3600 RPM and the server is now quieter than my existing 2950.
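
If you want to keep an eye on those numbers without pulling nodes, here is a minimal sketch of reading the temperature and fan sensors over each node BMC's IPMI-over-LAN interface, assuming ipmitool is installed; the IP and credentials below are placeholders for whatever your BMCs are actually configured with:

# Sketch: poll a node's BMC for temperature and fan readings (IP/user/password are placeholders)
ipmitool -I lanplus -H <node-bmc-ip> -U <bmc-user> -P <bmc-password> sdr type Temperature
ipmitool -I lanplus -H <node-bmc-ip> -U <bmc-user> -P <bmc-password> sdr type Fan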

Next I ordered 5 SanDisk 8GB Cruzer Fit USB 2.0 flash drives. The idea was to have the ESX hosts boot from the USB sticks. The best way to create these is to use the VMware Workstation creation process (Installing ESXi to a USB key using VMware Workstation 11 – Updated « Everything Virtual). I am using ESXi 6.0U1 with the custom Dell ISO. (Total cost for USB sticks: $19.) Don't forget to configure the "scratch" location to point at shared storage or your ESXi hosts will always complain about a lack of persistent log storage. A couple of notes: make sure you select USB HISPEED in the BIOS or the boot-up of the ESX hosts will take a long time. vSphere 6.0 runs fine. I run a four-node cluster (including one existing Dell 2950), and everything works fine: vMotion, DRS, etc. I have had no issues to date. The C6100 IPMIs also allow DPM to be implemented, if desired, to control power consumption.
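
For the scratch location, here is a minimal sketch of setting it from the ESXi shell, assuming the standard /ScratchConfig advanced option path; the datastore folder is a placeholder (create it first, and reboot the host afterwards):

# Sketch: point the ESXi scratch location at shared storage (paths are placeholders)
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation
esxcli system settings advanced set -o /ScratchConfig/ConfiguredScratchLocation -s "/vmfs/volumes/<shared-datastore>/.locker-<hostname>"
# reboot the host for the new scratch location to take effect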

The fourth node was set up as a Microsoft 2012 R2 server running Hyper-V. No issues with the install. Hyper-V runs great, and you can use the RAID function of the built-in ICH10 controller with Server 2012 R2. Using three of the 500GB drives I received, I created two RAID 5 volumes: ~200GB for the OS and ~730GB for the Hyper-V machines. I run a total of 5 Hyper-V VMs (for now) with no issues.

All of this took about a month of my off time. The result is very much worth the effort. I usually run with the 2950 and one of the C6100 nodes powered up, sometimes two of the C6100 nodes. My power bill has gone down, the window AC in my office where these are racked runs less frequently, and I have excess capacity for any future lab project (hint: NSX).

I will be glad to answer any questions that you may have, in my spare time. I hope this is helpful to you.
 

Dale McKay

New Member
Feb 17, 2016
4
1
3
65
If you have PCI cards installed, check to make sure the bottom of the bracket isn't extending so far that it prevents the node from sliding in and out.
 

parapsychotic

New Member
Apr 2, 2016
4
0
1
35
Hi, this thread has been helpful so far, but I'm confused about running fewer than 4 nodes. The manual shows the drive bay configuration based on how many nodes are used. I'm only using 3 nodes, which is configuration 1-2-4. The manual says I should get 4 drives each, going horizontally. However, the nodes still recognize only 3 drives each, going down vertically.

How do I tell the system I'm only using nodes 1, 2, and 4? Is it a setting, or a jumper somewhere? Has anyone else tried this?
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
Hi, this thread has been helpful so far, but I'm confused about running fewer than 4 nodes. The manual shows the drive bay configuration based on how many nodes are used. I'm only using 3 nodes, which is configuration 1-2-4. The manual says I should get 4 drives each, going horizontally. However, the nodes still recognize only 3 drives each, going down vertically.

How do I tell the system I'm only using nodes 1, 2, and 4? Is it a setting, or a jumper somewhere? Has anyone else tried this?
I only have a two-node unit, and it came configured so that each node is allocated 6 drive slots.

You will have to reconfigure the reverse breakout cables that run from the midplane to the hot-swap backplane.
 

nthu9280

Well-Known Member
Feb 3, 2016
1,628
498
83
San Antonio, TX
My C6100 experience. [...]
While researching the C6100 I found an article that I linked in the STH Resources -> Guides section. It describes the process of modifying the PE 2950 BIOS after changing its fans, to make that one quieter as well.
 

parapsychotic

New Member
Apr 2, 2016
4
0
1
35
Wow, I'm reading up on this more and found discussions about changing the breakout cables. I guess I would never have thought of that, since the manual made it look like it was automatic, but now I understand that it's not and how the cables work. I wanted 3 nodes with 4 drives each for RAID 10. But... reading up even more, it looks like the built-in "RAID" is fake RAID and VMware doesn't recognize it. So many "gotchas" so far.
 

parapsychotic

New Member
Apr 2, 2016
4
0
1
35
I didn't see a clear answer on this. Does anyone know?

My C6100 fans have a 5-pin connector, but only four wires: black, red, blue, and green. Any ideas what blue and green are?
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
I don't recall which is which, but the extra wires are an additional +12V and ground from the redundant PSU (i.e., there is a +12V from each PSU and a ground from each PSU).