Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


RimBlock

Active Member
Sep 18, 2011
Singapore
I noticed that the bottom nodes (which are #2 and #4) will not power on if the top nodes (which are #1 or #3) are not powered on.

Has anyone else observed this?
Nope, never seen this.

You may need to check the wiring to the fan controller board. Some of the cables may be connected incorrectly.

RB
 

RimBlock

Active Member
Sep 18, 2011
Singapore
Those are really iffy in a c6100. I put four into a c6100: Two worked fine, one worked until you bumped the chassis, and one never worked. It turns out - surprisingly - that they position the drive a few millimeters further forward than other solutions, which means that your drives aren't fully inserted into the backplane.
Strange; I have sold around 20 of these with no complaints or returns (if they didn't work, I would expect to hear about it :) ).

Did yours come from a single batch, or multiple batches from multiple sellers, etc.?

Cheers
RB
 

RimBlock

Active Member
Sep 18, 2011
Singapore
Another weird one...

So I finally deracked my own unit, where the 2nd node's front panel has never worked, and took it apart to try and resolve the problem. The fan controller's front panel headers were not broken and the cable was connected properly to the front panel. It turns out there were a couple of bent pins on the front panel connectors for that node on the fan controller board. Luckily I have a couple of spares for the older FCB, so I swapped it out. Now I am getting the orange flashing lights.

This is usually fixed by putting in a second PSU and seems to be a warning that you have a PSU failure. All well and good so far...

The weird part is that on one node I have installed Windows 2012 R2 Essentials (bare metal), and the front panel button for that node no longer flashes (the other nodes are still flashing).

RB
 

mason736

Member
Mar 17, 2013
I noticed that the bottom nodes (which are #2 and #4) will not power on if the top nodes (which are #1 or #3) are not powered on.

Has anyone else observed this?
I haven't noticed that exactly. Do you have 2 power supplies or 1? I have dual 1100 watt power supplies, and I've noticed that nodes 1 and 3 run off the top power supply, and nodes 2 and 4 run off the lower supply.
 

Clownius

Member
Aug 5, 2013
Yeah, I can run any node independently, or all at once, or even random selections. Heck, I even tested it and I can pull either PSU and all 4 nodes keep running. I'm running twin 1100W PSUs in both my C6100s. Both seem to work that way.

Technically they should, too, if you look at the power distribution wiring: it is set up so either PSU can power every node. That's basically why redundant power supplies exist. Plug them into two different power sources; if one fails, the other powers everything.
 

lmk

Member
Dec 11, 2013
Yes (to confirm even further) I can power on any combination of the 4 nodes, with only one of the two PSUs connected to a power outlet.
 

presmike

New Member
Feb 22, 2014
So, wow! I spent about 2 hours last night reading through this post. I made it about halfway through (lots of great pics and links, which got me sidetracked). So my main question is this. I am looking at this box: Dell C6105 Cloud Server 6x 1.8GHz AMD 6 Core 72GB RAM 3x 250GB C6100 Series | eBay

Here are a few questions before I buy:

1. AMD vs Intel. I grew up really caring about these wars (my first PC was a 486 DX4-100) but haven't paid a lot of attention in the last 3-5 years. Seeing as this thread covers the Intel ones, what are your thoughts on the box I am considering? Obviously the Intel version is more expensive, and I am not sure I get much more for it.

2. I assume IPMI isn't hard to figure out (boot locally and set the IP, then hit it over the browser like you would a DRAC or iLO)? See the sketch after these questions.

3. Networking. I currently run 1Gb copper to all my endpoints at my house, on a Dell PowerConnect 2716. I have never played with 10Gb or anything beyond typical Cat5 networking. What would be a *reasonable* cost to try some of the more exotic solutions you guys are talking about (switches, cards, cables, etc.) for the servers and 3-4 desktops in my house? Also, what tech would you recommend I look at if going with something more exotic?
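For reference, the usual approach seems to be exactly that: boot the node, give the BMC a static IP (either in the BIOS/IPMI setup screen or from the running OS with ipmitool), then browse to it. Below is a minimal sketch of the in-band route, assuming a Linux environment on the node with ipmitool installed and the IPMI kernel modules loaded; the LAN channel number and the addresses are placeholders, not values from this thread.

```python
#!/usr/bin/env python3
"""Sketch: give a node's BMC a static IP via ipmitool, then manage it over
the network like a DRAC/iLO. Assumes Linux with ipmitool installed and the
IPMI kernel modules loaded; channel and addresses are illustrative only."""
import subprocess

CHANNEL = "1"               # assumption: the LAN channel number can differ
BMC_IP = "192.168.1.211"    # hypothetical addresses for illustration
NETMASK = "255.255.255.0"
GATEWAY = "192.168.1.1"

def ipmi(*args: str) -> None:
    # Run one in-band ipmitool command on the local node (no -H/-U needed).
    subprocess.run(["ipmitool", *args], check=True)

if __name__ == "__main__":
    ipmi("lan", "set", CHANNEL, "ipsrc", "static")
    ipmi("lan", "set", CHANNEL, "ipaddr", BMC_IP)
    ipmi("lan", "set", CHANNEL, "netmask", NETMASK)
    ipmi("lan", "set", CHANNEL, "defgw", "ipaddr", GATEWAY)
    ipmi("lan", "print", CHANNEL)   # verify, then browse to http://<BMC IP>/
```

Once the BMC answers on that address, the web UI behaves much like a DRAC/iLO, and out-of-band commands (for example, ipmitool -I lanplus -H <BMC IP> -U <user> chassis power status) should work as well.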

Use Cases/Background:

1. I'm a geek at heart and love learning new things.
2. I have a Synology DS1511+ with a D513, so I don't really need a NAS, but I may experiment with ZFS for fun.
2.5: 5x 3TB in RAID 5 and 5x 4TB in RAID 6.
3. I will likely host a few VMs (Win7, Ubuntu) and may try OS X or FreeBSD for fun too.
3.5: P2Ving my work laptop as I type so I don't have to bring it home anymore ;)
4. It will be in a closet next to my kitchen (being a single guy, that's not a problem, and I'll try silencing it per the other thread), but I am slightly concerned that the closet doors have a decent gap and it may be rather audible.
5. FiOS connection, 150Mb/70Mb, but only 1 live IP :(
6. I realize I don't have a good excuse for buying it, but I want a new fun project that doesn't totally break the bank. Any other ideas would be great.

Lastly, should I be looking at a 4-node because... it's not much more? Links to better boxes greatly appreciated.
 

Clownius

Member
Aug 5, 2013
It's going to be seriously noisy. I had one running in my garage; you could hear it inside the house and halfway down the driveway.

It's a Dell C6105, if you want to look up more details. A rarer beast than the C6100.

Now, I don't know much about AMD server chips, but knowing their desktop chips, my main concerns would be power consumption and heat. More heat equals more noise, as the only cooling is 4 fans.

The nodes are certainly different. I'm not sure if the AMD version has a mezzanine slot; the Ethernet ports look to be in the way. Most of the 10G networking options I have seen mentioned are mezzanine cards. I personally only run gigabit.

All the AMD ones I have seen for sale only run 3 nodes. My guess is the power and heat side of things, but I'm not sure. Dell's website claims 4-node options.

Personally, I'd want to see how the thing is configured inside before I made the call. Check things like mezzanine slots and see how modifiable they are for the future. But it depends on your use cases.

Edit: The pics I can see on Dell's website have a different Ethernet port setup to the one in the pic of the unit you're buying. The Dell website shows 3 Ethernet ports side by side, one of them being an IPMI port. The ones in the seller's pics show just 2 stacked, so does it still have IPMI? Maybe DCS does C6105s too...

Edit 2: It's the early AM here, so maybe I missed something. If so, forgive me and correct me lol
 

presmike

New Member
Feb 22, 2014
Yeah, that seller seems to use the same picture for all their C6105s. I am also concerned/confused about the IPMI. Maybe I should consider spending a bit more and getting something I have more confidence in. I see the XS24s are selling now. Worth the upgrade, or stick with an XS23?
 

RimBlock

Active Member
Sep 18, 2011
Singapore
I noticed that the bottom nodes (which are #2 and #4) will not power on if the top nodes (which are #1 or #3) are not powered on.

Has anyone else observed this?
Thinking about this...

You may want to check that the power distribution boards (the ones the PSUs slot into) actually have the bridging piece installed (the vertical piece connecting the bottom PDB to the top one). I would imagine a missing bridge could give the same issues.

RB
 

M.Holder

New Member
Feb 5, 2014
Germany
Yeah, that seller seems to use the same picture for all their C6105s. I am also concerned/confused about the IPMI. Maybe I should consider spending a bit more and getting something I have more confidence in. I see the XS24s are selling now. Worth the upgrade, or stick with an XS23?
Is this really an upgrade?
XS24s on eBay are using Socket 771 CPUs...
But they are selling really cheap (@!#? the shipping to Germany -.-)
 

lmk

Member
Dec 11, 2013
Is this really an upgrade?
XS24s on eBay are using Socket 771 CPUs...
But they are selling really cheap (@!#? the shipping to Germany -.-)
No, the XS24 model is NOT an upgrade. It is an alternate, older-generation model.

Some big differences (comparing its weaker specs versus an XS23-TY3 model):

Older 54xx and 53xx CPUs (vs 55xx and 56xx); the newer CPUs alone bring other big improvements, with added AES encryption acceleration, lower VM latencies, memory improvements, etc.
DDR2 memory (vs DDR3)
ICH9 chipset (vs ICH10)
6 memory slots (vs 12)
3 nodes maximum (vs 4 nodes)
No mezzanine slot and cutout
2 NICs (vs 2 NICs plus an optional dedicated IPMI port)

etc.
 

idea

Member
May 19, 2011
I noticed that the bottom nodes (which are #2 and #4) will not power on if the top nodes (which are #1 or #3) are not powered on.

Has anyone else observed this?
Nope, never seen this.

You may need to check the wiring to the fan controller board. Some of the cables may be connected incorrectly.

RB
I haven't noticed that exactly. Do you have 2 power supplies or 1? I have dual 1100 watt power supplies, and I've noticed that nodes 1 and 3 run off the top power supply, and nodes 2 and 4 run off the lower supply.
I can power on nodes 2 and 4 without having node 1 or 3 on.
Yeah, I can run any node independently, or all at once, or even random selections. Heck, I even tested it and I can pull either PSU and all 4 nodes keep running. I'm running twin 1100W PSUs in both my C6100s. Both seem to work that way.

Technically they should, too, if you look at the power distribution wiring: it is set up so either PSU can power every node. That's basically why redundant power supplies exist. Plug them into two different power sources; if one fails, the other powers everything.
Yes (to confirm even further) I can power on any combination of the 4 nodes, with only one of the two PSUs connected to a power outlet.
Thinking about this...

You may want to check that the power distribution boards (the ones the PSUs slot into) actually have the bridging piece installed (the vertical piece connecting the bottom PDB to the top one). I would imagine a missing bridge could give the same issues.

RB
Thank you all! RimBlock, you were correct. One of the Molex connectors on the PDB was disconnected. I must have done that months ago and forgot about it. I can now power on nodes 1/2/3/4 individually.
 

Tamerz

New Member
Feb 25, 2014
Hello. I saw this thread and this looks like a great idea for some non-production ESXi hosts. I read through a lot of the pages here but am having a hard time confirming a few things, considering how many configurations there can be. On eBay, I'm looking at one of these servers:

Dell PowerEdge C6100 XS23 TY3 Server 24 Bay 4 Node 8x Intel Xeon L5520 96GB RAID | eBay

It looks like I should be able to throw 24 of these drives in it:

Dell 146GB 10K 2.5" 6Gbps SAS Hard Drive for PowerEdge C6100 | eBay

Then have 6 drives in a RAID 10 configuration per node. Am I correct in assuming this? I know little about mezzanine cards, etc., unfortunately. Is the idea to have one RAID card that serves all 4 nodes, but each node sees it as its own?

If this has been covered, I'm sorry, but I was unable to find it.
 

lmk

Member
Dec 11, 2013
128
20
18
Hello. I saw this thread and this looks like a great idea for some non-production ESXi hosts. I read through a lot of the pages here but am having a hard time confirming a few things, considering how many configurations there can be. On eBay, I'm looking at one of these servers:

Dell PowerEdge C6100 XS23 TY3 Server 24 Bay 4 Node 8x Intel Xeon L5520 96GB RAID | eBay

It looks like I should be able to throw 24 of these drives in it:

Dell 146GB 10K 2.5" 6Gbps SAS Hard Drive for PowerEdge C6100 | eBay

Then have 6 drives in a RAID 10 configuration per node. Am I correct in assuming this? I know little about mezzanine cards, etc., unfortunately. Is the idea to have one RAID card that serves all 4 nodes, but each node sees it as its own?

If this has been covered, I'm sorry, but I was unable to find it.
You can throw 24 drives in it, but with the default wiring each of the 4 nodes will only see 6 drives (24 drives / 4 nodes). Each node has one RAID mezzanine card (Dell LSI); its physical ports are 1 x SAS connector (which breaks out to 4 SATA ports) and 2 x SATA connectors, for 6 drives total per node. Also, the drives will only run as SATA; see below.

RAID will depend on the OS installed, as it is not a hardware ("proper") RAID card but a software one. If you use Windows you could, in general, do RAID 0, 1, or 10, depending on the version and drivers. Other OSes (without the support/drivers) will not understand the setup; this is why people who install something like ESXi do not see the aggregated logical "RAID" disk, but instead see each individual disk.

For RAID that will work with any (supported) OS, do proper hardware (accelerated) RAID, AND let SAS drives run at their full spec, you need to get the upgraded Dell LSI mezzanine SAS card.
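If a node is booted into Linux, one quick way to see which mezzanine controller it actually has is to look at the lspci output. Below is a minimal sketch; the SAS1068/SAS2008 match strings are assumptions and may vary slightly between pci.ids and firmware versions.

```python
#!/usr/bin/env python3
"""Sketch: report which LSI mezzanine card a C6100 node exposes, by checking
lspci output. The SAS1068/SAS2008 match strings are assumptions and may vary
slightly with pci.ids / firmware versions."""
import subprocess

def detect_lsi_mezz() -> str:
    # -nn includes numeric vendor:device IDs alongside the text description.
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True,
                         check=True).stdout
    if "SAS2008" in out:
        return "LSI SAS2008-based mezzanine (the upgraded 6Gbps card)"
    if "SAS1068" in out:
        return "LSI SAS1068-based mezzanine (the stock card)"
    return "No LSI SAS mezzanine found (onboard SATA only?)"

if __name__ == "__main__":
    print(detect_lsi_mezz())
```

The controller usually also announces itself in its option ROM banner during POST, so this is just a convenience check on a node that is already running.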
 

Tamerz

New Member
Feb 25, 2014
2
0
0
You can throw 24 drives in it, but with the default wiring each of the 4 nodes will only see 6 drives (24 drives / 4 nodes). Each node has one RAID mezzanine card (Dell LSI); its physical ports are 1 x SAS connector (which breaks out to 4 SATA ports) and 2 x SATA connectors, for 6 drives total per node. Also, the drives will only run as SATA; see below.

RAID will depend on the OS installed, as it is not a hardware ("proper") RAID card but a software one. If you use Windows you could, in general, do RAID 0, 1, or 10, depending on the version and drivers. Other OSes (without the support/drivers) will not understand the setup; this is why people who install something like ESXi do not see the aggregated logical "RAID" disk, but instead see each individual disk.

For RAID that will work with any (supported) OS, do proper hardware (accelerated) RAID, AND let SAS drives run at their full spec, you need to get the upgraded Dell LSI mezzanine SAS card.
Thank you, that is very helpful. So I would need one of these for each node to get SAS and hardware RAID?

Dell 6g SAS SATA LSI SAS2008 MEZZANINE Daughter Card PE C6100 C6145 XX2X2 | eBay
 

RimBlock

Active Member
Sep 18, 2011
Singapore
Thank you, that is very helpful. So I would need one of these for each node to get SAS and hardware RAID?

Dell 6g SAS SATA LSI SAS2008 MEZZANINE Daughter Card PE C6100 C6145 XX2X2 | eBay
The unit you linked lists the LSI 1068 mezzanine card as being included (Y8Y69).

This is fine for what you want. The more expensive and rarer LSI 2008-based controller matters if you want to populate with SATA II SSDs and have them run at full speed. The ones you would get with that unit are fine for HDDs (SATA or SAS).

RB
 

root

New Member
Nov 19, 2013
Quick question: are all 4 nodes in a C6100 identical? A friend of mine just got a C6100 server and two of the nodes have slightly different cabling.

Is that normal, or did he get nodes that belong to two different servers?

 