Dell 3-Node AMD DCS6005


MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Openstack probably. HPC not really. Slowest HPC cluster ever.

If you do OpenStack, these are almost ideal systems for a POC.

edit: NUMA basically describes how memory is accessed when multiple processors are connected in one system. "Non-Uniform Memory Access" if you want to Google it.
 
Last edited:

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
NUMA is a memory model that came from the supercomputing world as computers were being linked together and is now becoming more important as we have multicore computers and multiprocessor boards. Most of these C6XXX systems were designed for compute clusters.

In a nutshell, NUMA answers the question: "If I have multiple CPUs and each CPU controls its own memory, which memory should I use?" NUMA-aware systems (BIOS, OS, tools) will report back a map of how the CPUs are connected to memory and show you the relative cost (weight) of accessing each region. If you have CPUs A and B, with memory A1 and B1, it is faster for A to store and retrieve from A1 than to travel through B and use B1. This gets complicated with chips like the AMD 6200/6300, which have multiple dies on a single package, each die controlling its own memory. Knowing the layout allows you to do things like keep memory as close to the CPU as you can for optimal performance (memory pinning).

An example of how bad this can be: look at the block diagram for the 6200 CPUs and imagine a core in the upper left accessing memory attached to the lower right.
Opteron 6200 Processor

If you want to have fun, try tools like:
numactl --hardware -> to see the memory map
lstopo -> to see a really neat view of how the system is laid out.
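To make those weights concrete, here's a small Python sketch that parses the node distance table `numactl --hardware` prints. The sample output below is illustrative, not captured from one of these boxes; a distance of 10 is local, 20 is a remote hop, i.e. roughly twice the cost.

```python
# Parse the "node distances" table from numactl --hardware style output
# and compute the remote/local access cost ratio.
sample = """\
node distances:
node   0   1
  0:  10  20
  1:  20  10
"""

def parse_distances(text):
    """Return {(src, dst): distance} from numactl-style output."""
    lines = text.strip().splitlines()
    nodes = [int(n) for n in lines[1].split()[1:]]  # header row: node IDs
    dist = {}
    for row in lines[2:]:
        parts = row.replace(":", "").split()
        src = int(parts[0])
        for dst, d in zip(nodes, parts[1:]):
            dist[(src, dst)] = int(d)
    return dist

d = parse_distances(sample)
# Remote access (node 0 -> node 1) costs twice a local access here:
print(d[(0, 1)] / d[(0, 0)])  # 2.0
```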

Now mind you, this is really important for HPC, where every little bit of performance counts, but even in those cases you're looking at about a 15% performance hit when running flat out. For normal use, it doesn't matter too much.



Ken

New Member
Feb 10, 2014
49
1
0
I think I found the power setting: PowerNow!, under CPU Configuration. According to Wikipedia (!) it is similar to Intel's SpeedStep technology, in that it throttles CPU speed based on demand. I'm running the STHbench.sh benchmark again as I write this, and it 'seems' faster.

Disabling PowerNow! removes a PowerCap setting in the BIOS that sets the maximum speed the processor can hit.

As delivered, PowerNow! was enabled and set to Power State "0", which resulted in the CPU appearing as an 800 MHz part. Not sure what disabling PowerNow! will do for the benchmark.
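One way to sanity-check what speed the OS actually sees on a Linux node is the cpufreq sysfs interface, e.g. `/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq` (values are in kHz, so a throttled core reads 800000). Here's a small helper to make those readings human-readable; the sample values below are illustrative, not measured on this box.

```python
# Convert the kHz values cpufreq reports into human-readable MHz/GHz.
# On a live Linux box you would read the number from
# /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq.

def format_khz(khz: int) -> str:
    """Render a cpufreq kHz reading as MHz or GHz."""
    mhz = khz / 1000
    if mhz >= 1000:
        return f"{mhz / 1000:.1f} GHz"
    return f"{mhz:.0f} MHz"

print(format_khz(800000))   # 800 MHz (what a PowerNow!-throttled core reports)
print(format_khz(1800000))  # 1.8 GHz (the part's rated speed)
```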

There is also a Dell Power Save option that was enabled by default, for now I'm leaving it on - it appears to lower the VID voltage (from memory).
 

Ken

New Member
Feb 10, 2014
49
1
0
If you are referring to my question about the different auctions... then the question is whether this is the only difference. Yes, I'd put a new SATA drive or SSD into this puppy :)
There are (were) variations in the number of nodes (3 or 2), the number of drives, and the amount of RAM. I saw 1U dual-blade configs (I think) on eBay; they might have been 2U chassis with only 2 blades, but I doubt it.

I went for the maximum (available) RAM in each node, with one HD per node. New HDs are cheap (a TB for $60, an enterprise SATA TB for around $100 each), and remember, as shipped these units don't support SAS. I'm also not sure SATA III (6 Gb/s) drives are fully supported at SATA III speeds...

Remember, you can't add a fourth blade, and you can't add a redundant PS - these chassis were custom-built for three-node configs, no more.

The eBay seller has additional PSs and rack rails - I ordered a couple of spare PSs, as that seems to be a likely point of failure AND they seemed cheap ($30-40 for a 1,100 W PS).
 

Ken

New Member
Feb 10, 2014
49
1
0
Turning off PowerNow! in the BIOS caused the STHbench.sh benchmark to identify the CPU as 1.8 GHz, not 800 MHz.

I'll dig through the results to see if turning it off impacted performance in this benchmark - I suspect it did, but I haven't gone through the numbers yet.

Is there someplace I should focus my review? The STHbench.sh.log file is quite large...
 

Ken

New Member
Feb 10, 2014
49
1
0
The results of the STHbench.sh benchmark appear to be similar with PowerNow! enabled or disabled.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Turning off PowerNow! in the BIOS caused the STHbench.sh benchmark to identify the CPU as 1.8 GHz, not 800 MHz.

I'll dig through the results to see if turning it off impacted performance in this benchmark - I suspect it did, but I haven't gone through the numbers yet.

Is there someplace I should focus my review? The STHbench.sh.log file is quite large...
I think a parser is a planned feature for the next version; I remember seeing it in the thread. STHbench.sh was really just scripting what Patrick was doing for the front-page reviews.
 

gmac715

Member
Feb 16, 2014
37
0
6
Hello All

This has been a very good forum to read, and it has helped solve some of my Dell server issues. I recently bought on eBay a Dell C6100/C6105 cloud server: 6x 1.8 GHz AMD 6-core (hex core), 144 GB RAM, 3x 250 GB. I originally wanted to install the free version of ESXi on all 3 nodes, but I experienced a great deal of difficulty getting ESXi installed; however, the post in this thread about specifying the "ignoreHeadless=TRUE" option helped me get free ESXi 5.5 installed. The ESXi client would not detect any of the SATA storage HDDs. I initially thought this was due to driver support for the SB700 controller, so I wiped the 5.5 installation and installed 5.1 (which installed straight through with no problems), but still couldn't detect the HDD devices. Through Google research, I later found out that the SB700 SATA controller drivers have been included with ESXi since version 4.1.

The problem the entire time, and still now, is that several of the drive bays don't detect perfectly good, working hard drives even though the green drive indicator light is on. For example, drive bay 1-1 detects a hard drive, whereas drive bays 1-2 and 1-3 will not detect the same hard drive when it is placed in those bays for node 1. I cross-checked this with the BIOS, and the BIOS only detects hard drives in bay 1-1. ESXi can see that drive as well and mount it as a storage device, but not the drives in the other bays. The same goes for node 3, where drive bay 3-3 detects the working hard drive but drive bays 3-1 and 3-2 do not. I have made sure all cables are properly seated and connected on the system board, and AHCI is set as the controller mode in the BIOS.

Are there other thoughts or suggestions on what I might try to get those drive bays to detect the drives? I have already ordered new HDDs to be shipped. This is for my home lab environment.

Thanks in advance for any thoughts/suggestions.
 

Ken

New Member
Feb 10, 2014
49
1
0
I just wanted to add another note about these systems - I added 3x SATA drives, one to each blade, and after doing so the blades wouldn't boot. It turns out the BIOS juggled the boot order of the HDs when I added a second one. A quick visit to the BIOS to change the HD boot order corrected that (HD boot priority/order in the BIOS, as I recall).

In my three-blade chassis, each blade is wired to four drive trays, one complete row of trays per blade. Looking at the unit from the front, the top row of trays is for blade one, the second row for blade two, and the bottom row for blade three/four, as described on the sticker inside the system cover.

gmac715 - I wonder if you'd have better luck with your drives set as RAID? Also, I'd suggest investigating the cabling from the front plate to the blade motherboard - from memory, I think there are more SATA ports on the board than are wired to the front panel (6?). They may have been mis-wired at assembly, and if the system only had one HD installed at assembly, the bad wiring would not have been caught.

Is your system a C6005 like mine or a C6100/C6105 as you describe it?
 
Last edited:

Ken

New Member
Feb 10, 2014
49
1
0
I just threw two more drives on one blade, and had some interesting results...

Original config was one drive in far left bay.

Added a second drive in the second bay from the left; the boot order switched in the BIOS.

Added two more drives in the third and fourth bays from the left, 'lost' the second-bay drive, and two new drives appeared in Windows Server. The 'lost' drive lost its identity, I assume because the BIOS/SATA controller 'shuffled' drive assignments.

This blade (Tyan S8208) has SIX SATA ports; the BIOS shows six SATA ports, and the BIOS has an IDE setting that it says must be disabled to enable 6 SATA ports. The Tyan docs only describe 4 SATA ports.

All the above was done with SATA set to AHCI; I've not tried working with RAID settings yet...
 
Last edited:

Ken

New Member
Feb 10, 2014
49
1
0
On my box, the ports appear to be oddly wired:

First bay from left is wired to port 1 on the SATA controller
Second bay from left is wired to port 0 on the SATA controller
Third bay from left is wired to port 3 on the SATA controller
Fourth bay from left is wired to port 4 on the SATA controller

I found this by setting SATA mode to RAID and shuffling drives around, observing the changes in the drive listing.

Inside the RAIDcore firmware, the drive order changes so that the lowest installed drive is 'first' - based on the above wiring info - which explains why my second drive, added to the second bay from the left, became the boot drive - it is on a lower-numbered SATA port.

The above was observed on one blade, but based on boot behavior of all three blades I believe all are wired the same.

In my mind this confirms my suspicion that the SATA ports are wired in a non-logical sequence, and since (I assume) they never intended to install more than one drive, the actual wiring order was irrelevant.
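As a tiny sketch, the wiring observed above plus a lowest-port-first rule explains the boot-drive shuffle. The mapping below just restates the observed wiring; the selection rule is an assumption inferred from the boot behavior, not something documented by Dell or Dot Hill.

```python
# Observed bay-to-port wiring on this C6005 blade (bay counted from the
# left, front view). Note port 2 never appeared in the observations.
bay_to_port = {1: 1, 2: 0, 3: 3, 4: 4}  # bay -> SATA controller port

def first_drive(populated_bays):
    """Return the bay whose drive the firmware would list first,
    assuming the lowest-numbered SATA port wins."""
    return min(populated_bays, key=lambda bay: bay_to_port[bay])

# With drives in bays 1 and 2, bay 2 (port 0) becomes the boot drive:
print(first_drive([1, 2]))  # 2
```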

I am considering shuffling the SATA wiring so that one blade has six SATA bays and the other two blades have 3 each - that way one blade can act as a large storage server for the other two blades and the rest of my network.
 

Ken

New Member
Feb 10, 2014
49
1
0
The onboard RAID appears different from what I expected based on the Tyan data sheet for this motherboard. Has anyone worked with this RAID controller yet - is it decent? Does it support hot-spare drives?
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Isn't it just the old AMD "RAID" controller? Or is there an external raid controller addin card?
 

Ken

New Member
Feb 10, 2014
49
1
0
Isn't it just the old AMD "RAID" controller? Or is there an external raid controller addin card?
Don't know - the firmware is invoked by pressing Ctrl-R during system start (after the BIOS), and the RAID firmware refers to itself as 'RAIDcore from Dot Hill', so I think not, but I'm not familiar with "the old AMD RAID controller" you mention...

No add-in card in the blades in my system, I'm not even sure my blades have riser cards (don't remember seeing any, but didn't really look either).
 

Ken

New Member
Feb 10, 2014
49
1
0
I cracked open my case, there is no riser card on my blades, nor does it look like the blade frame could really accommodate one (IMHO).

The sticker inside my C6005 shows six SATA ports:

(Front of mb)

SATA B | SATA A
==========
SATA D | SATA C
==========
SATA F | SATA E

(Rear of MB)

And wiring is displayed as:

SATA A : HDD x-4
SATA E : HDD x-3
SATA B : HDD x-2
SATA D : HDD x-1

where x is the blade number (1/2/3).

I can't imagine why they are wired in this manner, but this is what is documented.
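For what it's worth, the sticker's connector-to-bay mapping can be written out as a small lookup table, which makes the non-sequential ordering easy to see. The dictionary below just restates the sticker; nothing here is from Dell documentation.

```python
# Connector-to-bay wiring as printed on the C6005 chassis sticker.
# "x" is the blade number (1-3); connectors C and F are not listed
# for the front bays on the sticker.
sticker_wiring = {"A": 4, "B": 2, "D": 1, "E": 3}  # SATA connector -> HDD x-N

# Reverse it to answer "which connector feeds bay N?"
bay_to_connector = {bay: conn for conn, bay in sticker_wiring.items()}
print(bay_to_connector[1])  # D
```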