The problem with that is the other chips with heatsinks; they rely on the passive airflow for cooling as well. So you'd have to get a set of 30/40mm fans to cool those, and those small fans have a tendency to be loud in the high-pitched range.

I always thought I would get a unit with just two sleds and put a 4U Supermicro HSF on the blades, though that wouldn't fix the Delta fans blowing like howling witches.
I tried. Or flash the PIC18 and let us know what happened...
Thanks for the help.
I only have a two-node unit, and it came configured so each node is allocated 6 drive slots.

Hi, this thread has been helpful so far, but I'm confused about running fewer than 4 nodes. The manual shows the drive bay configuration based on how many nodes are used. I'm only using 3 nodes, which is the 1-2-4 layout. The manual says I should get 4 drives each, going horizontally. However, the nodes still recognize only 3 drives each, going down vertically.
How do I tell the system I'm only using nodes 1-2-4? Is it a setting, or a jumper somewhere? Anyone else tried this?
In researching the C6100 I found an article that I linked in the STH Resources -> Guides section. It describes the process of modifying the PE 2950 BIOS after changing the fans to make it quieter.

My C6100 experience.
I wanted to upgrade my home lab. I am a consultant and run a variety of systems in a pretty extensive home lab: an MS 2012 R2 domain, an ESXi cluster, Hyper-V, Cisco gear, and iSCSI storage via Synology RS812 platforms. I was using Dell 1950 servers for a lot of this, so I now have three of them for sale... just sayin'.
I recently purchased a DCS C6100 (no valid Dell service tag at Dell support) from a seller on eBay. The final agreed price was $700, which included four 500GB drives as well. The C6100 (3.5" format) has 4 nodes, each with dual L5520s (only 60W TDP) and 48GB of RAM, plus dual power supplies. I pushed the seller to update the BIOS and BMC to the latest versions before shipping (BIOS at 1.71, BMC at 1.33).
I added network cards to the three nodes that I intended to use as ESXi nodes (Dell NetXtreme II Dual Port Gigabit PCIe Network Interface Card, G218C). When I installed these, the top of the metal bracket extended too far into the airflow for my liking, so I used tin snips to cut the metal bracket right at the top of the PCB. The network cards don't actually attach to anything other than the PCIe riser card and the associated metal bracket, but they are surprisingly steady and quite small, so there were no issues with the shortened bracket. (Total cost for three network cards: $30.) ESXi had no issue with these cards, and I run both a VDS and a standard switch in each ESXi host, connected via LAGs to my core 1Gb switch.
I then ordered four San Ace 80 fans (9G0812P1F041, 80x80x38mm, 5-pin, DC12V, 0.58A). I found out that my FCB was a PIC-16 board when I tried to update the FCB firmware, so no FCB update was done. My original fans were quite loud, as reported by numerous other users; however, they did quiet down after initial configuration of all four nodes' IPMI and the nodes themselves. At one point I reseated a node and was greeted with a rapid reduction in fan speed (filed under I DUNNO). My original fans ALMOST got as quiet as the fans on my existing Dell 2950.

I received the new fans and did the fan mod using the documentation found on this site. One suggestion I would make is to cut the existing fan wiring harness as close to the original fans as is comfortable. This alleviates having to re-run the wiring harnesses for the fans; just keep in mind that you have to leave enough wire for the stripping and soldering. I chose to use heat shrink over each of my connections, and everything fit very nicely after the mod was completed. (Sorry, no pictures taken.)

One additional note on cooling: my C6100 came with disk tray blanks in nine of the twelve slots. These blanks actually had a big piece of plastic, roughly the size of a 3.5" HDD, installed in them. I removed the plastic and then re-inserted the blanks to allow for more unrestricted airflow, since I would not be using HDDs in any of the ESXi hosts' drive slots. My CPU temps hover around 45 to 50C across all four nodes, my fans continually run at 3600 RPM, and the server is now quieter than my existing 2950.
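If you want to keep an eye on fan RPM and CPU temps after the mod without logging into each node's BMC web UI, something like this rough sketch works, wrapping ipmitool from Python. The BMC IP and credentials below are placeholders for your own; it assumes ipmitool is installed and the BMCs are reachable over LAN.

```python
# Rough sketch: poll a C6100 node's BMC for fan and temperature readings
# via ipmitool. BMC address and credentials below are placeholders.
import subprocess

BMC = {"host": "192.168.1.120", "user": "root", "password": "root"}  # your BMC here

def ipmi_sdr(sensor_type):
    """Read one SDR sensor type ('Fan' or 'Temperature') from the BMC."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"],
           "sdr", "type", sensor_type]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi_sdr("Fan"))          # fan sensors, RPM readings
    print(ipmi_sdr("Temperature"))  # CPU/ambient temperature sensors
```

Run it against each of the four BMCs in turn (or loop over a list of addresses) to confirm the fans are holding steady after the swap.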
Next I ordered five SanDisk 8GB Cruzer Fit USB 2.0 flash drives. The idea was to have the ESXi hosts boot from the USB sticks. The best way to create these is to use the VMware Workstation creation process (Installing ESXi to a USB key using VMware Workstation 11 – Updated « Everything Virtual). I am using ESXi 6.0U1 with the custom Dell ISO. (Total cost for USB sticks: $19.) Don't forget to configure the "scratch" location to shared storage, or your ESXi hosts will always complain about a lack of persistent log storage; see the sketch below for one way to set it. A couple of notes: make sure you select USB HISPEED in the BIOS, or boot-up of the ESXi hosts will take a long time. vSphere 6.0 runs fine. I run a four-node cluster (including one existing Dell 2950), and everything works fine: vMotion, DRS, etc. I have had no issues to date. The C6100 IPMIs also allow DPM to be implemented, if desired, to control power consumption.
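For the scratch location, here is a minimal sketch of setting it programmatically with pyVmomi (the community Python SDK for the vSphere API) rather than clicking through the client. The hostname, credentials, and datastore path are placeholders for your environment; ScratchConfig.ConfiguredScratchLocation is the standard ESXi advanced setting for this, and it only takes effect after a host reboot.

```python
# Minimal sketch: point an ESXi host's scratch location at shared storage
# so logs persist across reboots when booting from USB. Hostname,
# credentials, and datastore path are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esx1.lab.local", user="root", pwd="yourpassword",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # vmSearch=False makes FindByDnsName look for a host instead of a VM
    host = content.searchIndex.FindByDnsName(dnsName="esx1.lab.local",
                                             vmSearch=False)
    opt = vim.option.OptionValue(
        key="ScratchConfig.ConfiguredScratchLocation",
        value="/vmfs/volumes/shared-ds/.locker-esx1",  # unique dir per host
    )
    host.configManager.advancedOption.UpdateOptions(changedValue=[opt])
    # New scratch location takes effect after the host reboots.
finally:
    Disconnect(si)
```

Give each host its own directory on the shared datastore; hosts sharing one scratch directory will clobber each other's logs.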
The fourth node was set up as a Microsoft 2012 R2 server running Hyper-V. No issues with the install. Hyper-V runs great, and you can use the RAID function in the built-in ICH10 controller with Server 2012 R2. Using three of the 500GB drives I received, I created two RAID 5 volumes: ~200GB for the OS and ~730GB for the Hyper-V machines. I run a total of 5 Hyper-V VMs (for now) with no issues.
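For anyone checking the math on those volumes: RAID 5 across three drives gives you two drives' worth of usable space, so (3 - 1) x 500GB = ~1000GB raw, or roughly 930GB after formatting, which is where the ~200GB + ~730GB split comes from.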
All of this took about a month of my off time. The result is very much worth the effort. I usually run with the 2950 and one of the C6100 nodes powered up, sometimes two of the C6100 nodes. My power bill has gone down, the window AC in my office where these are racked runs less frequently, and I have excess capacity for any future lab project. (Hint: NSX.)
I will be glad to answer any questions that you may have, in my spare time. I hope this is helpful to you.