> Sorry, most people posting in this thread are interested in getting it cool while also keeping it quiet. I'd offer you my fans, but I hacked the fan headers off of them, so they're basically useless now. :sorry:

All good, thanks for the offer! If I can't find anyone I'll resort to eBay, but even the Chinese sellers of the stock fan model want about $15 per fan. Yikes.
> All good, thanks for the offer! If I can't find anyone I'll resort to eBay, but even the Chinese sellers of the stock fan model want about $15 per fan. Yikes.

I cut the wires to hook up San Ace 1.1A fans, but I didn't throw the original ones out; I should have them in the garage somewhere. If you can't find a reasonably priced set, I can send them for the cost of shipping.
> For reference, my CPUs are dual E5530s, so not the coolest of the bunch...

You might consider going on eBay and picking up dual L5520 CPUs, or possibly dual L5630 CPUs if your boards will support them. Lower power usage, and they'll run cooler too. Costs will be pretty small, as these CPUs are going for less than $5 each.
> Those adapters would only work for C6100s that have the older fan controller board. I tried to do a mod like this a while back on a non-DCS C6100, and the fans used some kind of 7-pin connector.

Right... that's the PIC16 board; the PIC18 FCB is quite different. Still hackable, but more complicated. So yeah, for anyone with the older systems like me, the adapters work like a champ with a little modding.
> You might consider going on eBay and picking up dual L5520 CPUs, or possibly dual L5630 CPUs if your boards will support them. Lower power usage, and they'll run cooler too. Costs will be pretty small, as these CPUs are going for less than $5 each.

Good thoughts... yeah, I'll probably do some hunting around when I have some time this weekend. Given my workloads are pretty small (a couple of small VMs and some Docker containers), I haven't been overly concerned about power. This is already better than the i7-950 that ran my previous lab, which at full tilt burned some watts. I still have that 950 running as I migrate services to the 6100. You're right though: while a single blade is about equal in power to my 950-based system, two blades are more expensive on power.
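For a rough sense of what that swap would buy me, here's the back-of-the-envelope math, treating TDP as a worst-case proxy for draw (80 W for the E5530 and 60 W for the L5520 are spec-sheet maximums, and the electric rate is just an assumption; actual idle draw will be far lower):

```python
# Back-of-the-envelope CPU power comparison using TDP as a worst-case proxy.
# Assumed figures: E5530 = 80 W TDP, L5520 = 60 W TDP per CPU; actual idle
# draw is far lower, so treat these as upper bounds, not measurements.
TDP_WATTS = {"E5530": 80, "L5520": 60}
CPUS_PER_NODE = 2
PRICE_PER_KWH = 0.12        # assumed electric rate in $/kWh -- use your own
HOURS_PER_YEAR = 24 * 365

def max_yearly_cost(cpu: str) -> float:
    """Worst-case yearly cost of one node's two CPUs running flat out."""
    watts = TDP_WATTS[cpu] * CPUS_PER_NODE
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

for cpu, tdp in TDP_WATTS.items():
    print(f"dual {cpu}: {tdp * CPUS_PER_NODE} W max, "
          f"~${max_yearly_cost(cpu):.0f}/yr at full tilt")
print(f"worst-case savings: "
      f"~${max_yearly_cost('E5530') - max_yearly_cost('L5520'):.0f}/yr per node")
```

So even in the worst case it's on the order of $40 a year per blade at my assumed rate, which is why I haven't been in a hurry.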
> I realise that this is a very old thread at this point, but I was wondering if anyone had any luck finding a 7-pin-to-4-pin adapter cable for the PIC18? My searching hasn't been able to find any (currently thinking I'll be going with some Noctua fans, but might go for something else), and I don't really want to start cutting up the fans I've got in there currently if I can avoid it.

I purchased the Noctuas with the extension cable and flipped the pins on the extension cable. This way it worked without issue, but it also left the fans in stock form so I could re-use them in the future. I just labeled the extension as re-pinned so I would know in the future not to use it as a stock fan connector.
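If you want to sanity-check that the BMC still sees the fans after a swap like this, ipmitool can read the fan sensors over the network. A quick sketch; the BMC address and the root/root login below are just placeholders for whatever your setup uses:

```python
# Read the fan sensors over IPMI to confirm the BMC still reports RPM
# after a fan swap. Assumes ipmitool is installed; the BMC address and
# the root/root login are placeholders -- substitute your own.
import subprocess

cmd = [
    "ipmitool", "-I", "lanplus",
    "-H", "192.168.1.120",       # hypothetical BMC address
    "-U", "root", "-P", "root",  # placeholder login -- verify yours
    "sdr", "type", "Fan",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)             # one line per fan sensor with current RPM
```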
Wanted to post this here before I go out to eBay. You might remember my original post (reposted below).
My C6100 is ready for a new home. Specs are below. The fourth sled is a little shaky: no BMC, the power button doesn't light, and the video is barely usable, but once it boots it's all good. The other three have no issues. All four boot from USB to ESXi 6.5 (around Update 2, I don't remember exactly); I'll leave the USB sticks with the sleds. No onboard disks, but three trays.
Price is very negotiable for anyone on this forum. If you are local to central Texas, even better.
"My C6100 experience.
I wanted to upgrade my home lab. I am a consultant and run a variety of systems in a pretty extensive home lab: an MS 2012 R2 domain, an ESXi cluster, Hyper-V, Cisco gear, and iSCSI storage via Synology RS812 platforms. I was using Dell 1950 servers for a lot of this, so I now have three of them for sale... just sayin'.
I recently purchased a DCS C6100 (no valid Dell service tag at Dell support) from a seller on eBay. The final agreed price was $700, which included four 500 GB drives as well. The C6100 (3.5" format) has 4 nodes, each with dual L5520s (only 60 W TDP) and 48 GB of RAM, plus dual power supplies. I pushed the seller to update the BIOS and BMC to the latest versions before shipping (BIOS at 1.71, BMC at 1.33).
I added network cards to the three nodes that I intended to use as ESX nodes (Dell NetXtreme II dual-port gigabit PCIe NICs, G218C). When I installed these, the top of the metal bracket extended too far into the airflow for my liking, so I used tin snips to cut the bracket off right at the top of the PCB. The network cards don't actually attach to anything other than the PCI riser card and its metal bracket, but they are surprisingly steady and quite small, so the missing bracket hasn't been an issue. (Total cost for three network cards: $30.) ESX had no issue with these cards, and I run both a VDS and a standard switch in each ESX host, connected via LAGs to my core 1 Gb switch."

Do create a FS thread for this.
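For anyone eyeing a used C6100, you don't have to take firmware claims on faith: ipmitool will report the BMC revision remotely. A minimal sketch, again with a placeholder address and login:

```python
# Query the BMC with ipmitool and pull out the firmware revision, so a
# claim like "BMC at 1.33" can be verified. The address and login below
# are placeholders -- substitute the values for your own BMC.
import subprocess

cmd = [
    "ipmitool", "-I", "lanplus",
    "-H", "192.168.1.120",       # hypothetical BMC address
    "-U", "root", "-P", "root",  # placeholder login
    "mc", "info",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
for line in result.stdout.splitlines():
    if "Firmware Revision" in line:
        print(line.strip())      # e.g. "Firmware Revision : 1.33"
```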