Dell 3-Node AMD DCS6005


gmac715

Member
Feb 16, 2014
37
0
6
That is very interesting that the drives aren't hot-swappable. I thought hot-swap was an inherent part of the SATA specification.
 

gmac715

Member
Feb 16, 2014
37
0
6
Two weeks ago, when I got the first C6105 (DCS6005 on the label inside the chassis cover), I tried to install ESXi 5.5, primarily because I have 48GB of RAM per node and the free ESXi 5.1 RAM limit is 32GB. I could boot the installer from CD or USB, but even with the ignoreHeadless option I still ran into a few issues during the install, so I abandoned it in favor of 5.1, which installed straight through with no problems.

I plan to clone the drive the 5.1 install is on, back it up to my NAS, and then try to upgrade it to 5.5 (free version) today. I will then try to install 5.5 on node 2, with all blank drives, later tonight.
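In case it helps anyone follow along, the imaging step I have in mind is roughly this (the disk device and NAS mount point are placeholders for my setup, so adjust them for yours):

    # boot the node from a Linux live CD/USB with the NAS mounted, then image the ESXi disk
    # /dev/sda and /mnt/nas are assumptions about my layout, not anything C6105-specific
    dd if=/dev/sda bs=1M conv=sync,noerror | gzip > /mnt/nas/node1-esxi51.img.gz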
 

gmac715

Member
Feb 16, 2014
37
0
6
I am curious what brand of hard drives many of you are using. I plan to buy more hard drives, but I would prefer to look at the brands others are using that work well for them.
 

Ken

New Member
Feb 10, 2014
49
1
0
That is very interesting that the drives aren't hot-swappable. I thought hot-swap was an inherent part of the SATA specification.
Just because something is part of the specification doesn't mean it's implemented... For example, Port Multiplier support is part of the specification, yet it is only implemented by certain controllers, not all.

Remember, these are purpose-designed machines with exactly one button on the server (blade) - the power button. In the application these machines were (typically) built for, a failed drive meant simply powering off the node, replacing the failed HD (the only HD in the node), and powering the node back up. It would then network boot, download its OS image, and rejoin the cluster without any further intervention. In such an environment, hot-swap support is not needed.
 

Ken

New Member
Feb 10, 2014
49
1
0
Quick question...

I'm curious, what are people doing with these servers? Mine is to replace a motley collection of hand-built white-box servers in my home training 'lab', not for production use.

I must admit to being pleasantly surprised that all three nodes consume less than 400 watts total when 'active'. That beats two of my previous servers, and the increase in RAM/CPU cores made these an unbeatable deal for home use (IMHO).

So, what are your C6005 servers up to?

(And why are people using C6005/C6105 interchangeably?)
 

Ken

New Member
Feb 10, 2014
49
1
0
Just a quick update. Bought a Supermicro RSC-R1U-E16R riser card off of Amazon and then used it to successfully install an LSI 9260-4i RAID controller...
I'm curious, did you attach the riser to the blade chassis? I eyeballed it on my C6005 and the chassis seemed to have the bracket for securing the riser right where the riser should be...

I might have a use to add another NIC to each blade, but not a pressing need at this point.
 
Last edited:

javi404

New Member
Jan 24, 2014
26
0
1
That is very interesting that the drives aren't hot-swappable. I thought hot-swap was an inherent part of the SATA specification.
They are; I think it was just a strange bug. It only happened once, and since then I have been able to confirm that hot-swapping disks works.
 

javi404

New Member
Jan 24, 2014
26
0
1
I'm curious, what are people doing with these servers? Mine is to replace a motley collection of hand-built white-box servers in my home training 'lab', not for production use.

I must admit to being pleasantly surprised that all three nodes consume less than 400 watts total when 'active'. That beats two of my previous servers, and the increase in RAM/CPU cores made these an unbeatable deal for home use (IMHO).

So, what are your C6005 servers up to?

(And why are people using C6005/C6105 interchangeably?)
I did exactly the same thing. The last whitebox server I will power down will be my NFS server where most of my VMs are stored. The third node will be the NFS server once I order some drives and move things over. Less important virtual machines will remain on local storage of those nodes.
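Once the NFS node is serving, pointing the other two nodes at it should be a one-liner per host. I believe the esxcli syntax is roughly the following (the hostname, export path, and datastore name are placeholders, so substitute your own):

    # mount the NFS export as a shared datastore on each ESXi node
    esxcli storage nfs add --host=nas.local --share=/export/vms --volume-name=nfs-vms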
 

javi404

New Member
Jan 24, 2014
26
0
1
I have purchased two of these off of eBay (probably from the same seller as javi). Both configurations were similar:

- Dell C6100/DCS6005
- 3 nodes
- Each node has 2x AMD Hex-Core CPUs & 48GB RAM
- One unit had 8 x 1TB drives and one unit had 9 x 1TB drives

I successfully installed Microsoft Hyper-V Server 2012 on each of the nodes in the 1st server and it is doing a great job of running multiple VMs for me. As my VMs are not very resource-intensive (small numbers of users), this configuration works beautifully.

HOWEVER, I attempted to run the same configuration on the 2nd server and have run into a disaster. It looks like the 2nd server either has a custom version of the BIOS or some other non-standard configuration that prevents the on-board GB NICs from working. I have tried almost everything... from loading fail-safe default BIOS settings to trying to install Intel GB NIC drivers... and the only thing left to do is to try to flash the BIOS. However, since I'm still within my warranty return period, I'm thinking about just returning the unit.

Has anyone else run into this issue... NICs don't work and are not recognized by OS?
Enable NUMA in the BIOS and see if that fixes it.
Wish I had seen this post back when I had to learn that from the vendor.
 

gmac715

Member
Feb 16, 2014
37
0
6
After I made a disk image of the drive that was hosting ESXi 5.1, I was indeed able to get through the upgrade to ESXi 5.5. In addition, I have also been able to get ESXi 5.5 (free) installed on another node as a fresh install.

The link posted in this forum about adding ignoreHeadless=TRUE to the install options was the critical piece that let me get through the install. In addition, based on the install best practices page on VMware's site and on others posting about their install experience, it is advisable to go into BIOS -> Advanced -> CPU and disable the NUMA setting.

Also, during the install, when you are asked to press Shift+O to enter the ESXi options, make sure that you simply press the spacebar and then type ignoreHeadless=TRUE. In other words, you want to append to the options that will already be listed there, not replace them. After the reboot, when you are asked to Shift+O again, you will be presented with a long line of options, but once again just add a space, append ignoreHeadless=TRUE, and press Enter.
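For anyone who wants the exact keystrokes, this is roughly what it looks like; the second command is the one I understand makes the setting permanent (it comes from VMware's guidance on headless hosts, so double-check it against your ESXi build before relying on it):

    # during boot/install: press Shift+O, add a space, then append to the options already shown
    ignoreHeadless=TRUE

    # after the install, from the ESXi shell, persist the setting so you don't
    # have to re-type it at every reboot
    esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless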
 
Last edited:

gmac715

Member
Feb 16, 2014
37
0
6
I did exactly the same thing. The last whitebox server I will power down will be my NFS server where most of my VMs are stored. The third node will be the NFS server once I order some drives and move things over. Less important virtual machines will remain on local storage of those nodes.
I am using the server for my home lab, which will allow me to get rid of a lot of smaller machines and laptops since I can now consolidate, shrink my physical footprint, and use less energy. I'm really just getting started with server virtualization, so this gives me a very good set of tools to work with.
 

gmac715

Member
Feb 16, 2014
37
0
6
Just because something is part of the specification doesn't mean it's implemented... For example, Port Multiplier support is part of the specification, yet it is only implemented by certain controllers, not all.

Remember, these are purpose-designed machines with exactly one button on the server (blade) - the power button. In the application these machines were (typically) built for, a failed drive meant simply powering off the node, replacing the failed HD (the only HD in the node), and powering the node back up. It would then network boot, download its OS image, and rejoin the cluster without any further intervention. In such an environment, hot-swap support is not needed.
You make a very good point. I think I was just surprised that a server with multiple storage bays would be limited by lack of hot-swap functionality.
 

gmac715

Member
Feb 16, 2014
37
0
6
Interested to see how ESXi 5.5 goes
I now have free ESXi 5.5 installed and configured on all 3 nodes. See my other post for some of the specifics, which made use of the info that others in this forum referenced.
 
Last edited:

gmac715

Member
Feb 16, 2014
37
0
6
My next task is to create a few server OS templates and practice deploying them. If any of you have good links or info on configuring various server OSs in VMware that have been helpful to you, feel free to share them. Thanks in advance.
 

gmac715

Member
Feb 16, 2014
37
0
6
What is it about NUMA that hinders ESXi 5.5 installation AND benefits Intel NIC operation?
I haven't had any NIC issues, so I wasn't really researching it from that angle. However, there is plenty of guidance out there on ESXi and NUMA. I was simply reading all of the best practices in preparation for installing ESXi 5.5.

There are two links in particular that I used to make the decision to disable NUMA in the BIOS, based on how node interleaving could impact performance.

VMware KB: ESXi/ESX Memory Management on Systems with AMD Opteron Processors

"When a processor accesses memory that does not lie within its own node (remote memory), the data must be transferred over the NUMA interconnect, which is slower than accessing local memory. Thus, memory access times are “non-uniform,” depending on the location of the memory, as the technology's name implies.

Node interleaving option is not a requirement for ESX to function.
"

VMware vSphere 4 - ESX and vCenter Server

"In most situations, an ESX/ESXi host’s automatic NUMA optimizations result in good performance."
and
"Manual NUMA placement might interfere with the ESX/ESXi resource management algorithms, which try to give each virtual machine a fair share of the system’s processor resources. For example, if ten virtual machines with processor-intensive workloads are manually placed on one node, and only two virtual machines are manually placed on another node, it is impossible for the system to give all twelve virtual machines equal shares of the system’s resources."
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
If you disable NUMA in the BIOS, ESX won't be able to do its optimization. You are basically disabling it globally, and the system will then, I would think, see all of the memory as being on the same bus and running at the same speed. The system would have no knowledge of the node layout.
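If you want to see what the host actually detected, I believe you can check from the ESXi shell; on 5.x something like this should report the NUMA node count (command from memory, so verify on your build):

    # shows physical memory plus the NUMA node count the host detected;
    # if the BIOS is hiding the node layout, I'd expect this to drop to 1
    esxcli hardware memory get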
 

gmac715

Member
Feb 16, 2014
37
0
6
If you disable NUMA in the BIOS, ESX won't be able to do its optimization. You are basically disabling it globally, and the system will then, I would think, see all of the memory as being on the same bus and running at the same speed. The system would have no knowledge of the node layout.
I'm really just jumping into server virtualization and modern server know-how altogether. I was thinking that, with NUMA enabled, each CPU node would be locked to an evenly distributed set of resources, whereas the ESXi engine automatically distributes resources when and where needed, and that this ability would be obstructed if NUMA were enabled. I certainly could have misinterpreted the information.