I will follow up. I am using Clonezilla to back up the 5.1 drive right now.

> Interested to see how ESXi 5.5 goes.
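For anyone unfamiliar with what that backup amounts to: Clonezilla produces a partition-aware image, but conceptually it's a block-level copy of the drive. Here's a minimal stand-in sketch in Python; the device and destination paths are placeholders (not from this thread), it needs root, and the destination must be on a different disk:

```python
# Minimal stand-in for a whole-disk image: stream the raw device into a
# gzip-compressed file. Clonezilla is smarter (it skips unused blocks and
# images partitions individually); this only illustrates the idea.
# SOURCE_DISK and IMAGE_PATH are hypothetical placeholders.
import gzip
import shutil

SOURCE_DISK = "/dev/sdX"                        # drive holding the ESXi 5.1 install
IMAGE_PATH = "/mnt/backup/esxi51-disk.img.gz"   # destination on another disk

def image_disk(source: str, dest: str, chunk: int = 4 * 1024 * 1024) -> None:
    """Copy the raw block device into a compressed image file."""
    with open(source, "rb") as src, gzip.open(dest, "wb") as out:
        shutil.copyfileobj(src, out, chunk)

if __name__ == "__main__":
    image_disk(SOURCE_DISK, IMAGE_PATH)
```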
Just because something is part of the specification doesn't mean it's implemented... For example, Port Multiplier support is part of the specification, yet it is only implemented by certain controllers, not all.

> That is very interesting that the drives aren't hot-swappable. I thought hot-swap was an inherent part of the SATA specification.
I'm curious, did you attach the riser to the blade chassis? I eyeballed it on my C6005 and the chassis seemed to have the bracket for securing the riser right where the riser should be...

> Just a quick update. Bought a Supermicro RSC-R1U-E16R riser card off of Amazon and then used it to successfully install an LSI 9260-4i RAID controller...
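As a sanity check after seating a card on the riser, it's worth confirming the node actually enumerates it on the PCIe bus before chasing drivers. A small sketch, assuming a Linux or ESXi shell where lspci is available; the "LSI"/"MegaRAID" match strings are just examples:

```python
# Hypothetical post-install check: list PCI devices and flag anything that
# looks like the LSI controller. Assumes lspci is on the PATH (true for most
# Linux distros and for the ESXi shell).
import subprocess

output = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
lsi_lines = [line for line in output.splitlines() if "LSI" in line or "MegaRAID" in line]

if lsi_lines:
    print("Controller detected:")
    print("\n".join(lsi_lines))
else:
    print("No LSI/MegaRAID device found - reseat the riser/card and check the BIOS.")
```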
They are; I think it was just a strange bug. It only happened once, and since then I have been able to confirm hot swap of disks.

> That is very interesting that the drives aren't hot-swappable. I thought hot-swap was an inherent part of the SATA specification.
I did exactly the same thing. The last whitebox server I will power down will be my NFS server, where most of my VMs are stored. The third node will become the NFS server once I order some drives and move things over. Less important virtual machines will remain on the local storage of those nodes.

> I'm curious, what are people doing with these servers? Mine is to replace a motley collection of hand-made whitebox servers in my home training 'lab', not for production use.
I must admit to being pleasantly surprised that all three nodes consume less than 400 watts total when 'active'. That beats two of my previous servers, and the increase in RAM/CPU cores made these an unbeatable deal for home use (IMHO).
So, what are your C6005 servers up to?
(And why are people using C6005/C6105 interchangeably?)
Enable NUMA in the BIOS and see if that fixes it.

> I have purchased two of these off of eBay (probably from the same seller as javi). Both configurations were similar:
> - Dell C6100/DCS6005
> - 3 nodes
> - Each node has 2x AMD Hex-Core CPUs & 48GB RAM
> - One unit had 8 x 1TB drives and one unit had 9 x 1TB drives
> I successfully installed Microsoft Hyper-V Server 2012 on each of the nodes in the 1st server, and it is doing a great job of running multiple VMs for me. As my VMs are not very resource-intensive (small numbers of users), this configuration works beautifully.
> HOWEVER, I attempted to run the same configuration on the 2nd server and have run into a disaster. It looks like the 2nd server either has a custom version of the BIOS or some other non-standard configuration that prevents the onboard gigabit NICs from working. I have tried almost everything... from loading fail-safe default BIOS settings to trying to install Intel gigabit NIC drivers... and the only thing left to do is to try to flash the BIOS. However, since I'm still within my warranty return period, I'm thinking about just returning the unit.
> Has anyone else run into this issue... NICs don't work and are not recognized by the OS?
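In case it helps narrow it down: a quick way to tell a BIOS/hardware problem from a driver problem is to ask Windows which network adapters it can enumerate at all. A rough sketch (assumes Python is available on the box; the same WMI query can be run directly with wmic):

```python
# Hypothetical diagnostic for the "NICs not recognized" symptom: dump every
# network adapter WMI knows about. If the onboard Intel ports don't show up
# at all, the issue is below the driver (disabled in BIOS/BMC or faulty
# hardware); if they appear with a ConfigManagerErrorCode, it's a driver or
# resource problem.
import subprocess

query = subprocess.run(
    ["wmic", "path", "win32_networkadapter",
     "get", "Name,PNPDeviceID,NetEnabled,ConfigManagerErrorCode"],
    capture_output=True, text=True, check=True,
)
print(query.stdout)
```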
I am using the server for my home lab, which will allow me to get rid of a lot of smaller machines and laptops, since I am now able to consolidate, have a much smaller physical footprint, and use less energy. I'm really just getting started with server virtualization, so this gives me a very good set of tools to work with.

> I did exactly the same thing. The last whitebox server I will power down will be my NFS server, where most of my VMs are stored. The third node will become the NFS server once I order some drives and move things over. Less important virtual machines will remain on the local storage of those nodes.
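For the NFS move, here is a sketch of how the datastore mount could be scripted per node. The host names, export path, and datastore label are placeholders, and the esxcli options should be verified against your ESXi build before running anything:

```python
# Hypothetical helper: print the esxcli command each node would run to mount
# the whitebox NFS export as a shared datastore. Names and paths are
# placeholders, not values from this thread.
NFS_SERVER = "nfs01.lab.local"      # the old whitebox NFS server
NFS_EXPORT = "/exports/vmstore"     # exported path on that server
DATASTORE = "nfs-vmstore"           # label the datastore will get in ESXi
NODES = ["c6005-n1", "c6005-n2", "c6005-n3"]

for node in NODES:
    print(f"# run in an SSH session on {node}:")
    print(f"esxcli storage nfs add --host={NFS_SERVER} "
          f"--share={NFS_EXPORT} --volume-name={DATASTORE}")
```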
You make a very good point. I think I was just surprised that a server with multiple storage bays would be limited by lack of hot-swap functionality.

> Just because something is part of the specification doesn't mean it's implemented... For example, Port Multiplier support is part of the specification, yet it is only implemented by certain controllers, not all.
Remember, these are purpose-designed machines with exactly one button on the server (blade) - the power button. In the application these machines were (typically) built for, you would simply power off the node, replace the failed HD (the only HD in the node), and power the node back up. It would network boot, download its OS image, and rejoin the cluster without any further intervention. In such an environment, hot-swap support is not needed.
I now have free ESXi 5.5 installed and configured on all 3 nodes. See my other posting for some of the specifics, using the info that others in this forum referenced.

> Interested to see how ESXi 5.5 goes.
In addition, based on the install best-practices page on VMware's site and others' posts about their install experience, it is advisable to go into BIOS->Advanced->CPU and disable the NUMA setting.
What is it about NUMA that hinders ESXi 5.5 installation AND benefits Intel NIC operation?

> Enable NUMA in the BIOS and see if that fixes it.
I haven't had any NIC issues, so I wasn't really researching from that aspect. However, there is plenty of guidance out there on ESXi and NUMA. I was simply reading all of the best practices for preparing to install ESXi 5.5.

> What is it about NUMA that hinders ESXi 5.5 installation AND benefits Intel NIC operation?
I'm really just jumping into server virtualization and modern server know-how altogether. I was thinking that with the NUMA option enabled, an evenly distributed set of resources would be locked down for each CPU node, whereas the ESXi engine automatically distributes resources when and where needed, and that this ability is obstructed if NUMA is enabled. I certainly could have misinterpreted the information.

> If you disable NUMA in the BIOS, ESX won't be able to do its optimization. You are basically disabling it globally, and now the system will think that the memory is all on the same bus and all the same speed, I would think. The system would have no knowledge of the node layout.
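One way to see what that "node layout" looks like from the OS side: on a Linux box (or VM), the kernel exposes each NUMA node under /sys. With NUMA enabled in the BIOS you typically see one node per populated socket; with it disabled (node interleaving), everything collapses into a single node0. A small sketch:

```python
# Print the NUMA nodes the kernel sees and the CPUs assigned to each.
# With NUMA disabled/interleaved in the BIOS, only node0 is reported and the
# OS (or hypervisor) has no locality information to optimize against.
import glob
import os

for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_path)
    with open(os.path.join(node_path, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{node}: CPUs {cpus}")
```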