Dell 3-Node AMD DCS6005


gmac715

Member
Feb 16, 2014
Thank you very much Ken and MiniKnight for your thoughts and feedback. It has been very helpful.

First, Ken, your advice to check the inside panel of the chassis cover, along with your drive bay configuration response, helped me nail down the issue. My drive bays are configured the same way you reported yours are:

1-1, 1-2, 1-3, 1-4
2-1, 2-2, 2-3, 2-4
3-1, 3-2, 3-3, 3-4

This drive bay configuration is of course much different from the drive bay layout diagram in the Dell C6105 Hardware Owner's Manual.

After testing all of the bays and cross-referencing with the HARD DRIVES detected in the BOOT section of the BIOS, I discovered that 2 of the 3 drives are detectable and that the other drive is not detectable in any bay. So my issue is resolved, but I am awaiting 4 other drives (2 x 1TB and 2 x 320GB) so I will have more drives to work with and test.

I initially tried to find as much documentation on the internet as I could; however, there wasn't much, and it required a lot of digging. Dell doesn't recognize the Dell Service Tag number on the chassis. Apparently, there have been a few different configurations of the PowerEdge C6105 cloud server since its inception in 2009. I believe the servers we all acquired on eBay are the very earliest model. Mine is configured with the Tyan S8208 system board (which I have been able to find a spec sheet for, but no real documentation from Dell on the server) and the Opteron 2419 EE processors.

Some links I used include (outside of this forum, of course):
http://www.tyan.com/datasheets/d_S8208.pdf
TYAN - Download Manuals: TYAN (S8208)
TYAN - Download Drivers: TYAN (S8208)
TYAN S8208
PowerEdge C6105 Rack Server Product Details | Dell

Although I gave it a lot of thought, I didn't switch the BIOS from AHCI to RAID yet because I felt my issue was somewhere between the drive bay configuration and the drives themselves.

Thank you again for the great information.

Once the HDDs arrive and I finish the drive bay testing (and document it), I will try to upgrade from ESXi 5.1 to 5.5 because of the 32GB RAM limit on the free version of ESXi 5.1; the free version of ESXi 5.5 raises the limit to 64GB. I have also read that many of the AHCI drivers included in ESXi 5.1 were left out of 5.5, and many people have had problems with their drives not being recognized in 5.5. However, I have also read that if you upgrade from 5.1 to 5.5 the AHCI drivers are retained, so I will give it a go.
 

gmac715

Member
Feb 16, 2014
"Is your system a C6005 like mine or a C6100/C6105 as you describe it?"

I actually copied that from the eBay listing. Yes, I do believe my server is the same as yours.

3-node chassis with Tyan S8208 MB, 48GB DDR2 PC2-5300 667MHz ECC RAM, 3 x 250GB Seagate Barracuda ES HDDs, 12 x 3.5" HDD bays

I actually purchased a 2nd C6105 server on eBay that doesn't have any RAM or HDDs. All servers eventually need spare/replacement parts (smile), and the BIOS on one of my server nodes (blades) is badly corrupted: it displays unreadable fuzz on the screen when I try to enter BIOS setup, and it will not boot at all, so I will swap in one of the blades from the spare server.

I actually found a site that I could purchase a new BIOS chip from: Tyan S8208 Bios-Chip24.com
My concern there was the BMC software. I wonder if this BIOS chip would include the most recent firmware and the IPMI (I believe) management software. Anyway, I erred on the side of purchasing a spare server.

What brand are your HDDs?
 

totalanni1134

New Member
Feb 17, 2014
I have a C6105 on the way.
It's my first multiple-node system.
Could anyone explain to me how the remote management works on these units?
It uses TYAN S8208 boards.
 

TangoWhiskey9

Active Member
Jun 28, 2013
I download Supermicro IPMI View and use the discover feature to find the IPMI IP addresses. Then I point a browser at that address. Super easy.
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
I might be interested in a C6105 versus a C6005. Moving to DDR3 helped a lot, if I remember correctly.
 

gmac715

Member
Feb 16, 2014
Do you know the model of node that takes DDR3? All of the C6105s I'm finding have nodes that are DDR2.
Referencing my earlier post, the C6105 was made available by Dell in the late summer of 2009. The earlier versions of the C6105 used DDR2 RAM and came with the Opteron 24xx series. Most of the C6105s on eBay are the earliest models, which were custom-configured in the 3-node chassis. The models that came after included Opteron 4xxx series processors. The more recent models of the Dell C6105 have DDR3 memory and Opteron 4xxx and 6xxx chips, and notice that they also have an RJ-45 port configured as the management port. I also believe the BIOS is from a different vendor on the more recent C6105 servers. The C6100 servers are the same as the C6105s, except the C6100s use Intel processors and the C6105s use AMD processors.
 

Townsend911

New Member
Feb 19, 2014
Hello, just found this thread on these eBay units. I have been trying to get information on them, and you have all supplied quite a bit already. Just wondered: did Ken get his BIOS power settings straightened out? Did you ever check your power draw after making the changes, Ken? I am mostly interested in possibly using this setup for World Community Grid / BOINC applications. It looks like you already tested one node and came to 130 watts; I just wasn't sure if that was at the 800MHz clock or the full 1800MHz clock speed. I currently run an old 4P Opteron 8431 system which draws about 450 watts on average. I figure my efficiency at about 7.81 watts per 1000MHz of processing power. If these units pull 130 watts per node, that would be about 6 watts per 1000MHz of processing power - pretty good efficiency compared to my current setup. If these 3-node systems pull 390 watts at full tilt, that would be 64.8GHz of total processing power, compared to my current 450 watts for 57.6GHz total on my four 8431s.
Also, I would be putting this in my basement. I'm not sure I saw any comments on the noise level these operate at. I live in Michigan, and the basement is currently around 56 degrees. How loud do these units get? Is there an option in the BIOS to control the fan speed?
Just wanted to thank everyone for posting their information in this forum.
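The watts-per-GHz comparison above can be sketched as a quick back-of-the-envelope calculation. A minimal sketch, assuming six cores per CPU for both the Opteron 8431 and the 2419 EE (the core counts are my assumption, not stated in the thread):

```python
def watts_per_ghz(watts, cpus, cores_per_cpu, clock_ghz):
    """Return (watts per GHz of aggregate clock, total aggregate GHz)."""
    total_ghz = cpus * cores_per_cpu * clock_ghz
    return watts / total_ghz, total_ghz

# 4P Opteron 8431 rig: four 6-core CPUs at 2.4 GHz, ~450 W average
eff_8431, ghz_8431 = watts_per_ghz(450, cpus=4, cores_per_cpu=6, clock_ghz=2.4)

# DCS6005 chassis: 3 nodes x 2 Opteron 2419 EE (6-core, 1.8 GHz), ~130 W per node
eff_6005, ghz_6005 = watts_per_ghz(3 * 130, cpus=6, cores_per_cpu=6, clock_ghz=1.8)

print(f"8431 rig: {ghz_8431:.1f} GHz aggregate, {eff_8431:.2f} W/GHz")
print(f"DCS6005:  {ghz_6005:.1f} GHz aggregate, {eff_6005:.2f} W/GHz")
```

This reproduces the figures in the post: 57.6GHz at about 7.81W/GHz for the 8431 setup versus 64.8GHz at about 6.02W/GHz for the 3-node chassis.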
 

Jeggs101

Well-Known Member
Dec 29, 2010
If you wanted to stay AMD, why not G34? That will be more efficient, since you have fewer systems and associated components for each core and MHz.
 

Townsend911

New Member
Feb 19, 2014
G34 Option

I would love to get a G34 board :p. My brother has a 4P G34 board... but the cost of that setup is too much for me at this point. DDR3 RAM costs a lot more... plus the CPUs are still a bit pricey. I have looked at the C32 socket as well. These newer-generation boards/chipsets are still more than I want to spend right now. If these 3-node systems run at full power under 400 watts with just one hard drive and 16GB of RAM per node, I think I could live with that at around $350. Down the road I will probably grab some 8425 HE processors for my current setup. Other folks with the same setup as me (4P Supermicro board, naked) are pulling around 350 watts total running these 2.1GHz chips. That still runs about 7 watts per 1000MHz of processing power, though. So possibly this 3-node system is a better value in that department. Hopefully Ken's numbers were with his CPUs running at the 1800MHz clock. Or if someone else has checked their power usage, that would be awesome. I know it will obviously differ depending on the amount of RAM and drives in use. I would probably opt for some 2.5" drives or an SSD to bring the power down a little from what is stock on these units. I think I looked up the specs on the drives that come with these, and they ran around 10-12 watts, somewhere in that neighborhood.

Thanks
 

TangoWhiskey9

Active Member
Jun 28, 2013
Great points. Is WCG BOINC credit based only on MHz, or also on how much a CPU can do per cycle? I thought there was a big upgrade with the 4100/6100 series Opterons.
 

Ken

New Member
Feb 10, 2014
"Just wondered after Ken got his BIOS - Power settings straightened out? Did you ever check your power output after making the changes Ken? I am mostly interested in possibly using this setup for World Community Grid--Boinc applications. Looks like you already tested it on the one node and came to 130 watts. Just wasn't sure if that was at the 800mhz clock or the full 1800mhz clock speed."
It turns out the benchmarks were identical with PowerNow! enabled and disabled. Apparently the PowerNow! function throttled the CPU up to full speed when the benchmark ran; the CPU self-identified as 800MHz before the benchmarks kicked in. When I disabled PowerNow!, the benchmark reported 1800MHz.

It was a non-issue, and it is probably safe to leave it enabled.

As for power, it didn't change either with or without PowerNow! enabled - though I only observed power usage while the system was running full-throttle during the benchmark and installs.

As an interesting comparison, I was working on a Dell PE1950 earlier today with decent specs - two dual-core Xeon 5130s, 16GB RAM, 4 x 2.5" SAS drives, and dual/redundant power supplies - and it consumed about 320 watts, if my Kill-A-Watt is to be trusted...

"Also, I would be putting this in my basement. Not sure if I saw any comments on the noise level that these operate at. I live in Michigan and currently the basement is around 56 degrees. How loud do these units get. Is there an option in the bios to control the fan speed."
The fan is about as loud as a hand-held DustBuster-type vacuum - loud enough to be annoying, but it should be fine in a basement setting...

"Just wanted to thank everyone for posting their information in this forum."
No problem, my pleasure.
 

kdh

New Member
Feb 6, 2014
Just a quick update. I bought a Supermicro RSC-R1U-E16R riser card off of Amazon and used it to successfully install an LSI 9260-4i RAID controller into one of our nodes. It took about an hour to re-run the new HDD cables; there are a lot of screws holding the fan brackets in place. The backplane has two sets of six SATA connectors, one on the left and one on the right. Keeping to the one-row-per-node wiring, this means half the cables go to the left and half go to the right. Just a little extra work to be aware of.
 

javi404

New Member
Jan 24, 2014
How in the world did you get the ESXi 5.5 installer to even boot?

I'm having all kinds of trouble with the three units I bought.

I can only get CentOS to run on mine.

They sold them to me as C6105s; I was pissed... but if I could run ESXi 5.5, that would help me out a lot.

Really glad to find this forum... probably gonna be my home page for a while.

I'm doing a big OpenStack POC on these, and so far it's a really hard machine to work with.

doug
Boot from CD (or remote media).
When it asks if you want to enter any kernel parameters, hit SHIFT-O on the keyboard.
Enter ignoreHeadless=TRUE.
Once installed, reboot.
Do the same thing: SHIFT-O, ignoreHeadless=TRUE.
Then, once you are on your first booted install of ESXi, get to a console and enter this command:
"esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless"

I found this at this link:
Dell CS24 ESXi 5.5 Install Stuck "Relocating modules and starting up the kernel..." | RobWillis.info
 

javi404

New Member
Jan 24, 2014
After catching up on this thread, I wanted to post some additional info that may help.
This applies to machines with the DCS6005 model number.
They use 2x the EE version of the 1.8GHz six-core Opteron CPU. (Wikipedia has the exact model number.)

[DISK WIRING]
I can confirm that mine came with the SATA drives wired all crazy.
On the inside of the cover you can see which port is supposed to be port 1, 2, 3, etc.

The blue port is port 1, and the wires should have labels on them.

Looking at the board from back to front, the SATA ports are:

2,1(blue)
4,3
5,6

I don't know why they were wired strangely from assembly, but mine was, and it was annoying, so I fixed it.

Drives are wired in horizontal rows:
row 1 (top) is node 1; left to right is 1, 2, 3, 4
row 2 = node 2
row 3 = node 3
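The row wiring above can be sketched as a tiny mapping. A minimal sketch using the node-slot bay labels from earlier in the thread (e.g. "2-3" = node 2, third bay from the left); the `bay_label` helper is my own illustration, not anything from the hardware:

```python
# One horizontal row per node; bays numbered 1-4 left to right.
def bay_label(row, col):
    """Top row = node 1, so the label is simply '<row>-<col>'."""
    return f"{row}-{col}"

# Build the full 3x4 bay grid as it appears from the front of the chassis.
layout = [[bay_label(r, c) for c in range(1, 5)] for r in range(1, 4)]
for row in layout:
    print(" ".join(row))
```

This prints the same grid gmac715 posted earlier in the thread (1-1 through 3-4).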

Also, I had one node that was flaky with hot-swapping a disk. I don't know if it was a BIOS setting, but rebooting the chassis fixed it, and now I can plug in drives and ESXi 5.5 sees them.

I currently have it set to AHCI in the BIOS; I'm not sure what AMD_AHCI does. RAID is useless unless you are running Windows, since it's a software RAID config from DotHill.
I originally tried to set up a RAID 5 in the firmware, but ESXi just saw raw disks.

[NOISE]
Also, I have mine in the garage; it's noisy enough that you don't want it in your office or anywhere humans are present most of the time.

[ESXi 5.5]

If you are installing ESXi 5.5, look up the ignoreHeadless=TRUE info in this thread,
or see this post: Dell CS24 ESXi 5.5 Install Stuck "Relocating modules and starting up the kernel..." | RobWillis.info

[Hot Swappability of Nodes]
Note that the nodes are not hot-swappable. Even if you swap one manually by unplugging the power and plugging it back into the motherboard, the power surge will reset the other 2 nodes. Just something I found out the hard way.
 

au.to

New Member
Mar 10, 2013
I have purchased two of these off of eBay (probably from the same seller as javi). Both configurations were similar:

- Dell C6100/DCS6005
- 3 nodes
- Each node has 2x AMD Hex-Core CPUs & 48GB RAM
- One unit had 8 x 1TB drives and one unit had 9 x 1TB drives

I successfully installed Microsoft Hyper-V Server 2012 on each of the nodes in the 1st server and it is doing a great job of running multiple VMs for me. As my VMs are not very resource-intensive (small numbers of users), this configuration works beautifully.

HOWEVER, I attempted to run the same configuration on the 2nd server and ran into a disaster. It looks like the 2nd server either has a custom version of the BIOS or some other non-standard configuration that prevents the on-board GbE NICs from working. I have tried almost everything... from loading fail-safe default BIOS settings to trying to install Intel GbE NIC drivers... and the only thing left to try is flashing the BIOS. However, since I'm still within my warranty return period, I'm thinking about just returning the unit.

Has anyone else run into this issue... NICs don't work and are not recognized by OS?
I had an issue with an older DCS box, and I needed to switch a jumper to enable the 2nd NIC.