Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Clownius

Member
Aug 5, 2013
85
0
6
This has probably been asked already, but this thread is getting pretty large: Is it possible to hook all 12 drive bays to one PCIe HBA on one node? What are the connectors on the backplane? 4x SFF-8087? It would be nice to have one loaded node for ZFS using all the bays, then rely on iSCSI boot for the remaining nodes and go diskless.
On the actual backplane there are 12 SATA-style connectors.
 

chune

Member
Oct 28, 2013
119
23
18
Thanks, after some more reading in this thread I think I understand. It is all clear now between this image:


and this post:
For those toying with the idea of modding a C6100, here is how the disk subsystem is built and wired:


The path goes like this:
Disk <-> Backplane <--> breakout Cable <--> Midplane board <--> Interposer board <--> Cable <--> Motherboard

That's a fairly complicated path, but the good news is that you can "intercept" at several different points using standard cables and connectors.


In more detail:
•The disks are in a hot-swap backplane, either 12x 3.5" or 24x 2.5" disks. I'm only going to describe the twelve-disk version here. The 24-disk version uses a different backplane and midplane.
•The backplane gets power from both power supplies for redundancy. Voltages are standard, but the power connectors are not four-pin Molex.
•The "front side" disk connectors are single-ported SAS/SATA.
•The "back side" disk connectors (on the rear of the backplane) are normal SAS/SATA ports.
•A "forward" breakout cable combines sets of three SAS/SATA connectors to a SFF-8087 SAS connector. There are a total of four of these cables, one for each motherboard, three disks each.
•Each breakout cable plugs into the midplane board. The midplane board has two SFF-8087 connectors for each motherboard - eight total. Normally, only one is used per motherboard.
•The midplane board connects to the interposer board using a large connector. Within the interposer, the disk channels are converted to SATA connectors. While the midplane has two SFF-8087 connectors per motherboard for a total of eight disk channels, only six are converted to SATA - four from one connector and two from the other. The midplane is part of the chassis while the interposer is part of the slide-out motherboard cartridge, so this connection is made or broken when the motherboard chassis is inserted or removed.
•Cables connect the SATA connectors on the interposer board to the connectors on the motherboard. In a standard system these are SATA to SATA cables but if a SAS mezzanine card is in use, then one of them is a SFF-8087 to SATA "forward" breakout cable.

Rephrased:
Backplane: SATA disk in, SATA out.
Breakout cable: SATA in, SFF-8087 out
Midplane: SFF-8087 in, large custom connector out
Interposer: Large custom connector in, SATA out
Motherboard: SATA in

Each motherboard has six disk connectors on its interposer board and so can be wired to up to six disks without losing the ability to hot-swap motherboards in and out of the chassis. If you need more than six disks for one motherboard, you can make it work, but you'll have a Frankenstein on your hands when you are done.
Unfortunately, it looks like anything over six drives per node breaks the hot-swap ability of the sleds; I didn't think about that.
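
For anyone keeping track of the stock layout, here is a minimal Python sketch of the limits described in the quote above; the bay-to-node ordering is my assumption for illustration, so check the chassis labels before rewiring anything.

Code:
# Rough model of the stock C6100 12-bay wiring described in the quote above.
# The bay-to-node ordering is assumed for illustration only.

NODES = ["node1", "node2", "node3", "node4"]
BAYS_PER_NODE_STOCK = 3      # one SFF-8087 breakout cable (3 drives) per node
INTERPOSER_SATA_PORTS = 6    # max drives per node without losing sled hot-swap

def stock_bay_map():
    """Return {node: [bay numbers]} for the factory three-drives-per-node layout."""
    return {node: list(range(i * BAYS_PER_NODE_STOCK, (i + 1) * BAYS_PER_NODE_STOCK))
            for i, node in enumerate(NODES)}

def hotswap_safe_drives(requested):
    """Clamp a per-node drive count to what the interposer can carry."""
    return min(requested, INTERPOSER_SATA_PORTS)

if __name__ == "__main__":
    print(stock_bay_map())
    # Wanting all 12 bays on one node exceeds the interposer's six ports,
    # so the extra drives would have to bypass it (breaking sled hot-swap).
    print(hotswap_safe_drives(12))   # -> 6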
 

root

New Member
Nov 19, 2013
23
0
1
What is the story with BIOS that can't be updated?

I remember reading earlier in the thread that some people experienced problems updating the BIOS/FW using the files available at Dell. So before placing an order on eBay I decided to ask for confirmation that the BIOS is upgradeable. Both sellers I asked, mobile_computer_pros and esisoinc, told me that their servers are custom made (they have something like tens of different C6100 configurations) and may not upgrade if I download the standard BIOS upgrade files from dell.com.

Does anybody know why that is and how to deal with it? Is there a way to update the BIOS on these servers? Is there a way to know which servers accept updates and which ones will not?

P.S. At least mobile_computer_pros offered to upgrade the FW/BIOS to the latest version, but told me that if I try to flash it myself and it fails, they will not be responsible, even if it happens within the 30-day DOA warranty they offer.

P.P.S. That's the latest at http://www.dell.com/support/drivers/us/en/555/Product/poweredge-c6100

Dell PowerEdge PEC6100 system BIOS - 1.71
Dell PowerEdge C6100 BMC Firmware - 1.33
Dell PowerEdge C6100 System Fan Controller Board Firmware - 1.20
 

chune

Member
Oct 28, 2013
119
23
18
NICE, how wasn't that name taken?? hahah. I would pick it up from MCP and be done with it. There are a lot of rumors circulating about the C product line as to whether they are DFS units or real Dell units with valid service tags. Make the seller do the dirty work.
 

geeker342

New Member
Jul 17, 2013
22
1
3
Will the nodes accept non-ECC memory? This would only be temporary until I can buy ECC memory again but I have 64 GB of non-ECC sticks and a lead on a system without memory.
 

Clownius

Member
Aug 5, 2013
85
0
6
root, it's really hard to say. I had 3 different configurations of nodes from 3 different sellers. Only one set of 4 actually showed up on Dell's website, but all 9 nodes upgraded fine... It's hit and miss, I guess, and I got lucky.

Only upgrade the BIOS if you really have to is my advice. I only did it because I was swapping in L5639s and some of the nodes still had a 1.04 BIOS and BMC that wouldn't work with them.

In other words, do you really need to upgrade the BIOS?

If MCP is offering to try it for you that also sounds like a good option.
 

root

New Member
Nov 19, 2013
23
0
1
I heard about problems with the BMC and old firmware; that's why I started digging. I guess once the seller updates it there will be no need for me to do so.
 

root

New Member
Nov 19, 2013
23
0
1
Is an 1100W PSU good for 12 HDDs, 8x L5639 & 192GB?

An 1100W PSU should be good. Here are the details from Dell's manual:

1100W - for a 4-node system: up to two processors, nine hard drives, and nine memory modules


What power supply do I need for 8x L5639 with 192GB RAM and 12 HDDs? I will also use two LSI 9260-8i cards for a 2x6 HDD setup. Will 1100W be sufficient? I think there are some folks here running similar configs; what is your experience? Since these will be production servers, I don't want any surprises.


How about the 24-disk variant? I will probably have to go and replace the PSUs with 1400W units, right? I am still thinking about whether I should go with 2.5" disks... I have a 2.5-year-old Supermicro server with an L5420 and 20GB of memory that has 8x 15k Savvio and 8x 500GB Constellation drives that I can move to the C6100 and add a few more for additional storage.
 

Clownius

Member
Aug 5, 2013
85
0
6
What power supply do I need for 8x L5639 with 192GB RAM and 12 HDDs? I will also use two LSI 9260-8i cards for a 2x6 HDD setup. Will 1100W be sufficient? I think there are some folks here running similar configs; what is your experience? Since these will be production servers, I don't want any surprises.


How about the 24-disk variant? I will probably have to go and replace the PSUs with 1400W units, right? I am still thinking about whether I should go with 2.5" disks... I have a 2.5-year-old Supermicro server with an L5420 and 20GB of memory that has 8x 15k Savvio and 8x 500GB Constellation drives that I can move to the C6100 and add a few more for additional storage.
I would seriously consider 2x 1100W PSUs or even 2x 1400W PSUs.
 

root

New Member
Nov 19, 2013
23
0
1
I would seriously consider 2x 1100W PSUs or even 2x 1400W PSUs.
Most sellers on eBay sell these with 1100W PSUs; that's why I wanted to know if that will work for my config. Why should I spend more for 1400W if the server with 12 drives and 8 CPUs at max will draw, say, 850-900W? It reminds me of my old computer that had a 700W PSU but was only drawing 220W under full load...
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Two 1100W PSUs will handle that load just fine. If you fill it with 12 7200 RPM disks too it might stress them a bit at initial spin-up, but don't forget that you have TWO 1100W PSUs - meaning that if they are both running you can actually deliver almost 2200W, less a bit for the efficiency of the power distribution board. If you go over the serviceable load of a single PSU you lose redundancy - but steady state with good input current you can still run.

But remember - you'll only get anywhere close to that if the two PSUs aren't both plugged into the same 15-amp circuit...

Regarding the 1400W PSUs - they can only actually deliver 1400W if powered by 220VAC. At 110V input they are limited to about 1050W due to input current limits. In fact, the 1100W version is current-limited to 1023W when fed by 110V. So, for a USA-based buyer there is absolutely no reason at all to go with the 1400W PSUs (unless, of course, you know a good electrician :)).
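
To put rough numbers on that, here is a back-of-the-envelope sketch; the 1023W figure is from the point above, while the per-component wattages (apart from the L5639's 60W TDP) are loose assumptions rather than measured values.

Code:
# Rough C6100 power-budget estimate. The PSU output limit on 110V comes from
# the discussion above; the per-component wattages are assumptions for
# illustration only, not measurements.

PSU_OUTPUT_110V = 1023      # W, 1100W PSU current-limited on 110V input
PSU_OUTPUT_220V = 1100      # W, full rating on 220V input

L5639_TDP = 60              # W per CPU (Intel's rated TDP)
DIMM_WATTS = 5              # W per DDR3 RDIMM, assumed
HDD_WATTS = 8               # W per 3.5" drive, idle/seek, assumed
HBA_WATTS = 20              # W per LSI 9260-8i, assumed

def estimate_load(cpus=8, dimms=24, hdds=12, hbas=2, overhead=1.25):
    """Very rough steady-state draw; 'overhead' covers fans, boards and losses."""
    raw = cpus * L5639_TDP + dimms * DIMM_WATTS + hdds * HDD_WATTS + hbas * HBA_WATTS
    return raw * overhead

if __name__ == "__main__":
    load = estimate_load()
    print(f"Estimated steady-state load: ~{load:.0f} W")
    status = "fits under one PSU (redundant)" if load <= PSU_OUTPUT_110V else "needs both PSUs (no redundancy)"
    print(f"Single 1100W PSU on 110V delivers {PSU_OUTPUT_110V} W -> {status}")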
 

root

New Member
Nov 19, 2013
23
0
1
Two 1100W PSUs will handle that load just fine. If you fill it with 12 7200 RPM disks too it might stress them a bit at initial spin-up, but don't forget that you have TWO 1100W PSUs - meaning that if they are both running you can actually deliver almost 2200W, less a bit for the efficiency of the power distribution board. If you go over the serviceable load of a single PSU you lose redundancy - but steady state with good input current you can still run.

But remember - you'll only get anywhere close to that if the two PSUs aren't both plugged into the same 15-amp circuit...

Regarding the 1400W PSUs - they can only actually deliver 1400W if powered by 220VAC. At 110V input they are limited to about 1050W due to input current limits. In fact, the 1100W version is current-limited to 1023W when fed by 110V. So, for a USA-based buyer there is absolutely no reason at all to go with the 1400W PSUs (unless, of course, you know a good electrician :)).
Thanks for this info...I was looking purely at Watts and forgot that there are other things to consider :)

I was planning to fill it with 6x 15k RPM Cheetahs and 6x 4TB Ultrastars for storage. I will try to reduce the startup current by using the staggered spin-up control on the LSI controllers.
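
For anyone curious how much staggering helps, a quick sketch of the worst-case drive draw; the surge and idle figures per drive are assumptions, and the real numbers are on the drive datasheets (often quoted as roughly 2A extra on the 12V rail during spin-up).

Code:
# Illustration of why staggered spin-up keeps the startup surge down.
# Per-drive wattages are assumptions; check the drive datasheets.

SPINUP_SURGE_W = 25     # assumed extra draw per drive while it spins up
IDLE_W = 8              # assumed draw per drive once spinning

def peak_drive_power(total_drives, drives_per_group):
    """Worst-case drive power: the last group spins up while earlier groups idle."""
    spinning_up = min(drives_per_group, total_drives)
    already_up = total_drives - spinning_up
    return spinning_up * (IDLE_W + SPINUP_SURGE_W) + already_up * IDLE_W

if __name__ == "__main__":
    for group in (12, 4, 2):   # all at once vs. staggered groups
        print(f"{group:2d} drives per group -> ~{peak_drive_power(12, group)} W peak")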
 

root

New Member
Nov 19, 2013
23
0
1
1100w is going to be fine.

Aren't the backplanes in these SATA only?

Hmmm... according to specs, they support both:

Drive Bays and Hard Drives

  • 3.5” SATA (7.2K): 250GB, 500GB, 1TB, 2TB
  • 3.5” SAS (15K): 300GB, 450GB, 600GB
  • 3.5” NL SAS (7.2K): 1TB, 2TB, 3TB


I hope they can co-exist in one chassis; both groups of disks are connected to separate controllers, so there is no problem with mixing disks at the controller level.


EDIT: It looks like you're right, MiniKnight; there is a different part number for the 3.5" SAS backplane - V3X78... Can anybody confirm a mixed SAS/SATA setup in one chassis?
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
Aren't the backplanes in these SATA only?
YMMV, I suppose, but all of these that I've seen on the used market have SAS backplanes. Most of them are coming off-lease from large to extremely large data center environments. I seriously doubt any of these original buyers would have considered a SATA only backplane for more than about 5 seconds (with most of those 5 seconds spent laughing).
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
...

EDIT: It looks like you're right, MiniKnight; there is a different part number for the 3.5" SAS backplane - V3X78... Can anybody confirm a mixed SAS/SATA setup in one chassis?
Every c6100 backplane that I have seen had single-ported SAS drive connectors and was able to handle SAS or SATA drives. I have even mixed the two on several of my machines, both the c6100 and the c6145.
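
If you want to double-check what a particular chassis ended up with from the OS side, here is a small Linux-only Python sketch that reads sysfs. It assumes the drives show up as /sys/block/sd*; SATA disks presented through the SCSI layer normally report the vendor string "ATA", so treat this as a heuristic, not a definitive test.

Code:
# Heuristic check of which attached disks are SATA vs. SAS on Linux.
# SATA drives behind libata/an HBA usually report vendor "ATA", while SAS
# drives report the real vendor string (SEAGATE, HGST, ...).

from pathlib import Path

def list_disks():
    for dev in sorted(Path("/sys/block").glob("sd*")):
        vendor = (dev / "device" / "vendor").read_text().strip()
        model = (dev / "device" / "model").read_text().strip()
        kind = "SATA (likely)" if vendor == "ATA" else "SAS (likely)"
        print(f"{dev.name}: {vendor} {model} -> {kind}")

if __name__ == "__main__":
    list_disks()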
 

geeker342

New Member
Jul 17, 2013
22
1
3
I'm calling you out PigLover :p

I'm buying into the C6100 club and need to migrate my current stack to these nodes. CPUs I have, HDD/SSD too, but ECC memory I do not. I plan to eventually (in a month or so) begin to fill the nodes but in the mean time need to check if non-ECC non-registered DIMMs will work in these systems. It looks like the Intel 5520 chipset supports it. Dell looks to have only shipped these with ECC memory because who runs a datacenter with non-ECC memory? (and who has two thumbs and is trying to, this guy!)

Will the nodes accept non-ECC memory? This would only be temporary until I can buy ECC memory again but I have 64 GB of non-ECC sticks and a lead on a system without memory.
 

root

New Member
Nov 19, 2013
23
0
1
Every c6100 backplane that I have seen had single-ported SAS drive connectors and was able to handle SAS or SATA drives. I have even mixed the two on several of my machines, both the c6100 and the c6145.
YMMV, I suppose, but all of these that I've seen on the used market have SAS backplanes. Most of them are coming off-lease from large to extremely large data center environments. I seriously doubt any of these original buyers would have considered a SATA only backplane for more than about 5 seconds (with most of those 5 seconds spent laughing).
Thanks, guys!

So, there is no problem mixing both types of drives in a "standard" chassis. My setup requires 1.5TB of fast local storage attached to one node for a couple of VMs (that's why I need 15k RPM SAS drives), and a second node with slow but large storage. 6x 4TB in RAID10 will give me 11+ TB of space.
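
For the capacity math behind that "11+ TB" (mirroring halves the raw space, and the gap between decimal TB and binary TiB does the rest):

Code:
# RAID10 usable capacity for 6x 4TB drives. "4TB" is decimal (10**12 bytes)
# and mirror pairs halve the raw capacity; filesystem overhead is ignored.

drives, size_tb = 6, 4
usable_bytes = drives * size_tb * 10**12 // 2
print(usable_bytes / 10**12, "TB")             # 12.0 TB decimal
print(round(usable_bytes / 2**40, 1), "TiB")   # ~10.9 TiB as the OS reports it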
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
...
My setup requires 1.5TB of fast local storage attached to one node for a couple of VMs (that's why I need 15k RPM SAS drives), and a second node with slow but large storage. 6x 4TB in RAID10 will give me 11+ TB of space.
There is another option that I very highly recommend: dump the 15K SAS drives and use one or more large SATA SSDs instead. Even a single SSD will out-perform a pile of SAS drives in a virtualization scenario. For my c6100 VM cluster, I use 512GB SSDs to house my VMs, and I run up to 12 big fat Windows VMs per c6100 node without any noticeable slowness. I could very likely run many more. As you probably know, VM IO is usually limited by disk IOPS, and even really fast SAS drives will give you only ~200 IOPS, while you get tens of thousands of IOPS from even the worst SSDs.

Here is another idea, since the c6100 has only three or six disk slots per node: I have stopped using any type of RAID for VM storage. Instead, I use plain SSDs and rely on very frequent VM replication plus daily backups as my DR strategy. This decreases storage costs with only a small increase in possible downtime and a very small data-loss window, which is perfectly acceptable for most (but not all) VMs. My c6100 "corporation in a box" setup has three c6100 nodes acting as VM hosts plus one as a VM replica destination.
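
As a rough illustration of that IOPS argument: the ~200 IOPS per 15K spindle is from the point above, while the SSD figure and the per-VM demand are assumptions for the sake of the example, so measure your own workload.

Code:
# Back-of-the-envelope VMs-per-datastore estimate based on random IOPS.
# SAS figure from the discussion above; SSD and per-VM numbers are assumptions.

SAS_15K_IOPS = 200
SATA_SSD_IOPS = 30_000      # assumed; even modest SATA SSDs exceed this
IOPS_PER_BUSY_VM = 100      # assumed average random-IO demand per VM

def vms_supported(device_iops, devices=1):
    return (device_iops * devices) // IOPS_PER_BUSY_VM

if __name__ == "__main__":
    print("6x 15K SAS (striped):", vms_supported(SAS_15K_IOPS, devices=6), "VMs")
    print("1x SATA SSD:         ", vms_supported(SATA_SSD_IOPS), "VMs")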
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
I'm calling you out PigLover :p

I'm buying into the C6100 club and need to migrate my current stack to these nodes. CPUs I have, HDD/SSD too, but ECC memory I do not. I plan to eventually (in a month or so) begin to fill the nodes but in the mean time need to check if non-ECC non-registered DIMMs will work in these systems. It looks like the Intel 5520 chipset supports it. Dell looks to have only shipped these with ECC memory because who runs a datacenter with non-ECC memory? (and who has two thumbs and is trying to, this guy!)
The 5500 chipset should support non-ECC UDIMMs. Unfortunately, I have never tried this so I can't confirm whether it works or not. I also can't point to anyone else who has had success with it.

I just sold & shipped my last C6100 so I can't test it for you either. Would love to hear your results.