Dell 3-Node AMD DCS6005


BackupProphet

Well-Known Member
Jul 2, 2014
Has anyone had problems with the SATA cabling? I have two nodes that can't find my hard disk in any of the slots. I swapped one cable for another, and then it worked just fine. Does anyone know how I can get at the backplane to disconnect and replace all the cables? I don't know what I should unscrew...
 

Thanos

New Member
Jun 16, 2014
If the disks are seen on just a couple of the node's SATA ports, maybe you need to check the BIOS settings (assuming there's nothing wrong with the cabling!).
There's a setting that will prevent more than two disks from being seen. Under "Advanced", leave the IDE controller ENABLED; for the rest I have AHCI mode enabled.
Could you check in the BIOS whether IDE is disabled? I remember reading that you should leave it enabled so that more than 2 SATA disks are seen...
 

Thanos

New Member
Jun 16, 2014
I have gone through the BIOS setup of one of the nodes. I have 12 disks, so each node has 4 SATA disks connected. The images show how my BIOS is set up. The main thing to check is the Southbridge Configuration (under the "Chipset" menu) and that "SATA IDE Combined Mode" is set to DISABLED. The help text to the right of this item is the key: if you want up to 6 SATA ports (that's the max onboard anyway), it must be set to DISABLED.

Let me know if that was the issue!
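
For anyone who wants to double-check the result after flipping that BIOS setting, here is a minimal sketch of confirming the port count from a live Linux shell (device names and counts will vary with your drive population):

```
# Kernel log lines from the AHCI driver; one "SATA link up" per populated port
dmesg | grep -i 'sata link up'

# Block devices the OS actually detected, with size and model
lsblk -d -o NAME,SIZE,MODEL
```

If only two links come up, the combined-mode setting above is the likely culprit.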
 


Lichtwald

New Member
Jan 9, 2014
To dig up this thread before it gets cold... Has anyone been able to run one of these nodes outside of the chassis yet? I know it has a "Proprietary 20-pin (12V single input) power connector", but it physically fits a standard 20-pin plug. I'm probably going to go through it pin by pin with a multimeter one of these free afternoons.

After that gets taken care of, hopefully it is just a matter of figuring out which pins to short to turn it over like a normal motherboard...

Edit: I've fiddled with the chassis a bit today.

Pins 1 through 8 are all 12V (1V when the node is off)
Pins 11–17 and 19 are all ground
Pin 18 (the other yellow wire) is 5V
Pin 20 is brown, and I'm not quite sure what it is yet. It goes to a 2-pin header on top of the power distribution board. It is most likely a PSU-on logic line, but I need to test some more to be sure...

Also, as expected, the header U15 near the USB ports carries the power-switch pins.

Next step is to duplicate the connector and see what I can get going.
 

Thanos

New Member
Jun 16, 2014
Tried to check the user guides from Dell for similar boards (not the same...), but even those docs did not have the pinout for the power connectors. Maybe a hardware manual has it. The problem with these boxes is that they are custom made, and we can't find decent (i.e. detailed) documentation. Neither Dell nor TYAN could provide anything (I asked both of them).

So, I guess you're a pioneer :) Even if you figure it out, you should take the cooling of the board into account. Seriously, my biggest issue (unless you run it in a cool/air-conditioned room) is the temperature of the SR5650 (it's the main chipset, if I'm not mistaken). The limit is 98 °C (!!), and board #2 (bottom right) ran very close to that limit when I used it in my office (~24 °C ambient). I had to shut that board down after it started a continuous beep (i.e. >98 °C).

just FYI. Good luck with the power!
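
A follow-up thought: since the BMC works on these nodes, that chipset temperature can be watched remotely instead of waiting for the beep. A sketch using standard ipmitool calls (the BMC address and credentials are placeholders; check `ipmitool sdr list` to see which sensor names your board actually exposes):

```
# Dump every temperature sensor the BMC reports
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin sdr type Temperature

# Poll it every 30 seconds while the node is under load
watch -n 30 'ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin sdr type Temperature'
```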
 

Lichtwald

New Member
Jan 9, 2014
Thanks Thanos! Looks like I was editing my post while you replied! Anyway, I have a pair of low-profile Swiftech waterblocks on order to cool the processors, but I haven't given any thought to the chipsets. They look like typical 40mm two-screw mounts, so while it adds to the BOM, it's more copper pipe I get to run :)
 

Thanos

New Member
Jun 16, 2014
Watercooling! Hmm... good idea :) This is not a design for rooms that aren't air-conditioned. Yes, the chipset must be one of those two smaller chips closer to the back of the board. My guess is that the airflow back there isn't enough to cool them if the ambient air temperature is higher than 21-22 °C (I was OK once I turned on the A/C :) ). It would be quite interesting to see photos of your finished watercooling project :)

I wish you success!
 

s0lar_j3tman

New Member
Jul 17, 2014
Hey gang,

Lots of great information in this thread regarding this platform. My fellow nerds and I are as excited as some of you seem to be about what we can do with this box for a home lab setup.

There was something I was looking for an answer on that the internet can't seem to confirm, but I feel like it's possible.

If we were using a RAID card like a 9650SE-12ML, would it be possible to wire all the drive cages to a single blade?
The purpose would be to use one blade as a storage appliance running FreeNAS or something similar,
and to use the other two blades as ESXi VM hosts which consume iSCSI storage from that FreeNAS box; over the GbE LAN, of course.
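
The ESXi side of that plan, at least, is straightforward; a minimal sketch of pointing a host at an iSCSI target with standard esxcli calls (the adapter name vmhba33 and portal address 192.168.1.10 are placeholder assumptions; the drive-cage wiring is the open question):

```
# Enable the software iSCSI initiator on the ESXi host
esxcli iscsi software set --enabled=true

# Point it at the FreeNAS portal (adapter name varies per host)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260

# Rescan so the exported LUNs show up as storage
esxcli storage core adapter rescan --adapter=vmhba33
```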

Any feedback on that subject from folks who have already put time into this platform would be super appreciated.

Thanks
 

Lichtwald

New Member
Jan 9, 2014
9
9
3
The HD bays are wired to the nodes in sets of four, and the cables are super easy to reroute if you want to put all 12 on the same node. Each board has 6 SATA ports, so with a card and a riser it'll be easy to centralize the storage. I was debating doing this myself before I decided to embark on this silly build.

That being said, I ordered a new Seasonic 650W with a single 12V rail (don't want any circulating currents...) and wired up a generic adapter cable that can go on any PSU.


I have a few more pictures including the chipset waterblocks installed over here
 

teCh0010

New Member
Jun 17, 2014
I read through the "Taming the 6100" thread about replacing the fans in the Intel 6100 version. Has anyone replaced the fans in these with the Dell take-offs, Supermicro, or Evercool options?
 

John C. Knight

New Member
Jul 21, 2014
Just got this server and managed to get IPMI working (though it does not seem to support VLAN tagging). I loaded ESXi 5.5 with ignoreHeadless=TRUE, no issues. What is weird is that the blades go unresponsive overnight. IPMI works, but the OS is down and I have to reset them. It's consistently blades 1 and 2, while 3 seems stable. Anyone else run into this?
 

Lichtwald

New Member
Jan 9, 2014
9
9
3

I've had one of my three blades be a little flaky, but after testing all the memory I found it had a bad stick in it, so after removing that, all is well.

I still haven't futzed with the IPMI on these. Did you reflash it with the link from the earlier discussion, or was yours ready to go out of the box?
 

John C. Knight

New Member
Jul 21, 2014
IPMI was ready to go out of the box for me. I think I may have gotten to the bottom of the hangs: I disabled NUMA and I have been up for 24 hours now......
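
If anyone wants to verify the same change on their nodes, ESXi reports the NUMA topology it detected; a quick sketch from the host shell (assuming ESXi Shell or SSH is enabled):

```
# With NUMA disabled in the BIOS this should report a single node
esxcli hardware memory get | grep -i numa
```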
 

gmac715

Member
Feb 16, 2014
> It appears one of my nodes has a corrupted BIOS. I am able to install ESXi 5.5, but when I try to pull that host into vCenter, it fails. When looking at the BIOS-supplied information on the host, a portion of the information is scrambled. Any ideas how to resolve this?

That is very similar to the issue I was having on one of the physical nodes when trying to install vSphere 5.5 back in February. I was indeed able to finally get it to work with a link suggested in one of the earlier posts: Dell CS24 ESXi 5.5 Install Stuck "Relocating modules and starting up the kernel..." | RobWillis.info

I had to interrupt the installation process to specify a config setting in order to continue and get through the ESXi 5.5 installation. I am running vSphere 5.5 and vCenter on all 3 nodes of one of these servers.
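
For anyone who hits the same wall, the config setting from that write-up is the headless-console workaround; a sketch of the usual sequence (treat the persistence command as an assumption and check it against your own install):

```
# At the ESXi boot/installer screen, press Shift+O and append:
#   ignoreHeadless=TRUE
# then continue the boot.

# After installation, make the setting survive reboots:
esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless
```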
 

gmac715

Member
Feb 16, 2014
> Status update, for whoever may find it useful :)
> Two of the nodes are set up with ESXi 5.5 Update 1. One of them is fully functional, in terms of both the compute part and web access to the IPMI interface. The second node is functional only for the compute part: its IPMI interface responds to ipmitool (from a Mac) but not via the web interface, and I have no clue how this can be fixed. Both nodes can see the 4 disks attached to them, no issues there, and I have ESXi installed on a USB stick (SanDisk Cruzer 8GB) in the onboard USB port. The 3rd node is useless so far. The funny part is that its IPMI is OK (web and a tool like 'ipmitool'), but when it powers on there's no VGA output and a continuous beep until I power it off.
>
> I removed the coin-cell battery to reset the BIOS to the default settings. Nothing. IPMI reports "VBAT = 0 Volts"; on the other 2 nodes this sensor has a value and is reported as Normal, while here it is reported as "Lower Critical".
> Without documentation I have no clue what it is. I opened a ticket with TYAN for help on the S8208 board to make sense of all this. They replied after a couple of hours (!) saying that I should seek support from Dell, since this board was made for Dell. Not very helpful, since Product Support at Dell doesn't recognize the Service Tag of this box...
>
> I will open a ticket with Dell, just for the fun of it. It's a big disappointment that it's so damn hard to get a proper manual for a board. If anyone has a user guide for *this* specific board, please share it.
> I'm expecting a USB-to-serial cable tomorrow to connect to the COM port of the board, just in case I can get something done the old-fashioned way...
>
> By the way, for those who have expressed the thought of installing and experimenting with VMware's VSAN on these boxes: wait for 5.5 Update 2. It appears that the AHCI driver will be "updated" in Update 2; I could not get it to work in 5.5 Update 1 (the kernel times out trying to use the SATA drives...). Other than that, I think this server is a bargain simply for the amount of DRAM (in virtualization it's the best asset to have ;)).
>
> I will report back with news on this front. Or I will purchase an extra barebones server to replace the failing node...

I actually went through much of this myself back in February (see my posts) and I sympathize with your ordeal. I ended up getting a barebones server from the seller on eBay just to use as a parts server; I replaced the problem node with one of its nodes back in March, and it has worked well for me since. It is very useful having this home lab setup with these servers. I will say I have had to replace 2 hard drives already, and I just lost a Barracuda ES.2 750GB drive today. The first 2 losses were on the same drive bay, but today's loss is on a different bay. I guess I should expect to lose a few drives occasionally when trying to run a home data center (smile). Having this setup, though, has greatly helped me learn a lot about networking, Linux, and VMware. So I guess it has been worth the setbacks I had when I first started back in February.
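
On the VSAN/AHCI point in the quote above: while waiting for Update 2, it is easy to record which AHCI driver build each host is currently running, so you can tell later whether the update actually changed it. A sketch from the ESXi shell:

```
# Show the bundled SATA/AHCI driver VIB and its version string
esxcli software vib list | grep -i ahci
```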
 

teCh0010

New Member
Jun 17, 2014
I ended up swapping the fans with Dell OptiPlex Sanyo Denki San Ace 80 pulls. They are currently running at 3K RPM with all three systems running. Blade 2 is the hottest, running at 50 °C under moderate load in an air-conditioned but still warm room. The system is much quieter.

I was also able to get VMware Distributed Power Management to power a server back on through the IPMI interface, so I'll let DPM power one off when it doesn't need it.
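
For anyone wanting to test the same IPMI power control by hand before trusting DPM with it, the BMC answers standard chassis commands; a sketch with placeholder address and credentials:

```
# Query the node's current power state
ipmitool -I lanplus -H 192.168.1.51 -U admin -P admin chassis power status

# Power it back on, which is effectively what DPM does over IPMI
ipmitool -I lanplus -H 192.168.1.51 -U admin -P admin chassis power on
```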
 

John C. Knight

New Member
Jul 21, 2014
Anyone else experiencing blades just halting? I am fairly certain it is not OS-related, as all 3 blades are running ESXi 5.5 and only blades 1 and 2 are affected; 3 stays up. I enabled remote dump collection to see if ESXi was purple-screening, but it is not. The blade just stops, and it only seems to happen overnight.....
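
In case anyone else wants to rule out a purple screen the same way, the remote dump collection mentioned above is a few esxcli calls on each host; a sketch, assuming a dump collector is already listening at 192.168.1.20 (a placeholder) on the default port 6500:

```
# Send kernel dumps to a network dump collector instead of local disk
esxcli system coredump network set --interface-name vmk0 --server-ipv4 192.168.1.20 --server-port 6500
esxcli system coredump network set --enable true

# Verify the host can actually reach the collector
esxcli system coredump network check
```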