Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


PigLover

Moderator
Jan 26, 2011
Just remember that the 5500/5600-series Xeons are very sensitive to the number of DIMMs populated per channel. Not a huge deal for most applications, but as an example, if you fill all the DIMM slots with 1066MHz quad-rank memory you end up running at 800MHz DDR3 max on the 5500 series. In many cases more memory is going to be better than faster memory, but it does depend on what you are running.

Todd, who did you buy from? I had the 3-4 day processing time on my first one also. I ordered that one from pnieman. Also, when I was at UnixSurplus/MrRackables I got the impression that the CPUs/memory were pulled from the units there, so they would need to be configured.
The 55xx/56xx Xeons used on these boards have another memory limitation: if you use 1.35V RDIMMs you can only populate two of the three banks per CPU (6 sticks per CPU instead of all 9). I learned this limitation the hard way on my SuperMicro dual-Xeon workstation build (forgot to RTFM... oops). If you plan to populate all three banks, make sure you are using 1.5V RDIMMs.
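
If you want to double-check what the memory actually trained to after you populate the slots, dmidecode will tell you on Linux. A rough sketch along these lines (needs root, and the "Configured Clock Speed"/"Configured Voltage" fields vary by dmidecode version and BIOS, so don't be surprised if some come back blank):

import subprocess

# Rough sketch: print per-DIMM size, rated speed, and (where the BIOS reports
# them) configured speed and voltage from dmidecode. Assumes Linux with
# dmidecode installed and running as root; field names vary by version.
out = subprocess.run(["dmidecode", "-t", "memory"],
                     capture_output=True, text=True, check=True).stdout

for section in out.split("Memory Device")[1:]:
    fields = {}
    for line in section.splitlines():
        key, sep, val = line.partition(":")
        if sep:
            fields[key.strip()] = val.strip()
    if fields.get("Size") and fields["Size"] != "No Module Installed":
        print(fields.get("Locator", "?"),
              fields["Size"],
              "rated:", fields.get("Speed", "n/a"),
              "configured:", fields.get("Configured Clock Speed", "n/a"),
              fields.get("Configured Voltage", ""))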
 

PigLover

Moderator
Jan 26, 2011
BTW, mine appears to have arrived. Got a motion-detect clip from my front door camera of the FedEx guy dropping off a ridiculously large box...
 

Patrick

Administrator
Staff member
Dec 21, 2010
BTW, mine appears to have arrived. Got a motion-detect clip from my front door camera of the FedEx guy dropping off a ridiculously large box...
I have two of those in the car right now ready to ship out to the DC. Eagerly awaiting your thoughts.
 

PigLover

Moderator
Jan 26, 2011
I have two of those in the car right now ready to ship out to the DC. Eagerly awaiting your thoughts.
It might take me a while to form up anything coherent. Too many projects in the hopper. Before anything else I'm going to spend a bit of time with it working out a good way to partially silence the fans. I'll be running L-series CPUs and I won't have any fast/hot 15k SAS drives in the front. In fact, I'm going to experiment with PXE and/or iSCSI boot and not have any local drives at all. Those high-pressure fans are terrible overkill. It will live in unoccupied space, but I'm afraid that 66-75 dBA noise profile will be audible through the walls. I need to get it down to about 50 dBA (or lower).

I managed to get my dual-X5500 system (with 13 spinny drives and 4 SSDs) down to 32 dBA and it runs cool. But that used a large-volume case and heatpipe/tower coolers. I'll have to set more modest goals with 8 CPUs in a 2U rackmount.

I'll try something easy first - just resistors on the existing fans. My worry is that the fan control board will try to outsmart me...
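
For sizing the resistors it's just Ohm's law, though the fan current is something I'll have to measure first. A quick sketch with made-up numbers (a 12V fan drawing about 1A, dropped to roughly 7V):

# Back-of-the-envelope sizing for a series dropping resistor on a 12V fan.
# The current figure is made up - measure your actual fans, and remember fan
# current isn't constant with voltage, so this is only a starting point.
V_SUPPLY = 12.0   # volts at the fan header
V_TARGET = 7.0    # desired voltage across the fan
I_FAN = 1.0       # amps the fan draws (hypothetical - measure yours)

v_drop = V_SUPPLY - V_TARGET
r_series = v_drop / I_FAN      # Ohm's law: R = V / I  -> 5.0 ohm here
p_resistor = v_drop * I_FAN    # power the resistor dissipates -> 5.0 W here

print(f"series resistance: {r_series:.1f} ohm")
print(f"resistor dissipation: {p_resistor:.1f} W (use a generously rated part)")

The catch is exactly the one above: the fan controller may simply spin everything back up once it sees the lower RPM.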
 

gigatexal

I'm here to learn
Nov 25, 2012
Today is ambiguous day. First I post ambiguously, and then I get reamed at my internship for being ambiguous in emails.

I was thinking more of the traditional hard-disk install of an OS, but the USB OS idea makes a lot of sense.

Your question could be interpreted two ways. A disk (hard disk) to install the OS to? Or a disk (CD or otherwise) to install the OS from?

The first version probably isn't what you were asking, but each server (MB) has 3 (3.5" model) or 6 (2.5" model) disk slots on the front dedicated to it.

The second way to interpret your question is more interesting. Is there removable media (CD or otherwise) to install the OS from? Yes - it's on the PC/laptop/etc. that you manage them from, and you remote-attach it via IPMI. There are also USB slots on the back if you prefer using a USB install or a portable CD drive.
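
If you go the USB-install route, putting the installer on the stick is just a raw block copy. A minimal sketch - both paths below are placeholders, and writing to the wrong device will wipe it, so triple-check:

import shutil

# Minimal sketch: copy an installer ISO onto a USB stick block-for-block
# (the Python equivalent of dd). Both paths are placeholders - point
# SOURCE_ISO at your image and TARGET_DEV at the whole USB device (not a
# partition), and be absolutely sure it's the right one. Needs root.
SOURCE_ISO = "/tmp/installer.iso"
TARGET_DEV = "/dev/sdX"

with open(SOURCE_ISO, "rb") as src, open(TARGET_DEV, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks

print("Done - run 'sync' before pulling the stick.")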
 

gigatexal

I'm here to learn
Nov 25, 2012
It might take me a while to form up anything coherent. Too many projects in the hopper. Before anything else I'm going to spend a bit of time with it working out a good way to partially silence the fans. I'll be running L-series CPUs and I won't have any fast/hot 15k SAS drives in the front. In fact, I'm going to experiment with PXE and/or iSCSI boot and not have any local drives at all. Those high-pressure fans are terrible overkill. It will live in unoccupied space, but I'm afraid that 66-75 dBA noise profile will be audible through the walls. I need to get it down to about 50 dBA (or lower).


I managed to get my dual-X5500 system (with 13 spinny drives and 4 SSDs) down to 32 dBA and it runs cool. But that used a large-volume case and heatpipe/tower coolers. I'll have to set more modest goals with 8 CPUs in a 2U rackmount.

I'll try something easy first - just resistors on the existing fans. My worry is that the fan control board will try to outsmart me...
Your fan choices will be interesting, because if you can get them down to 50 dBA or lower that would be of interest to me too.
 

Patrick

Administrator
Staff member
Dec 21, 2010
It might take me a while to form up anything coherent. Too many projects in the hopper. Before anything else I'm going to spend a bit of time with it working out a good way to partially silence the fans. I'll be running L-series CPUs and I won't have any fast/hot 15k SAS drives in the front. In fact, I'm going to experiment with PXE and/or iSCSI boot and not have any local drives at all. Those high-pressure fans are terrible overkill. It will live in unoccupied space, but I'm afraid that 66-75 dBA noise profile will be audible through the walls. I need to get it down to about 50 dBA (or lower).

I managed to get my dual-X5500 system (with 13 spinny drives and 4 SSDs) down to 32 dBA and it runs cool. But that used a large-volume case and heatpipe/tower coolers. I'll have to set more modest goals with 8 CPUs in a 2U rackmount.

I'll try something easy first - just resistors on the existing fans. My worry is that the fan control board will try to outsmart me...
That would be SUPER interesting! Will buy one for the lab if we can figure that out.
 

Madhelp

Member
Feb 7, 2013
Does anyone know if the backplane can be rewired to send all of the SAS ports to an HBA card on one of the four nodes?
 

PigLover

Moderator
Jan 26, 2011
As delivered, you can fairly easily wire it for a "node" in the chassis to support 6 drive slots. You could also physically wire it to support more than 6 drives, but you would be left with a configuration where you can't slide out the node without taking off the lid and removing the SAS/SATA cables first.

The drive connections are wired through an "interposer card" and a connector between the MB tray and the node. The interposer card only has 6 connectors for drives on it (amusingly, mine also shipped with a 1068E-based SAS mezzanine card, and Dell designed it to expose exactly 6 of the 8 ports the chip natively supports - 4 on an 8087 connector and then two more on individual SAS/SATA connectors). They appear to have done this to support a version with only two nodes, each node connected to 4 drives.

I would imagine that there exists a 12-port (or at least an 8-port) version of the interposer card used on the 2.5" drive chassis. I was actually hoping to locate one, because I think - long term - I may want a 4-node system with 8 drives on one node and each of the others having exactly one drive. Not top of my priority list, but something for later.
 

Madhelp

Member
Feb 7, 2013
As delivered, you can fairly easily wire it for a "node" in the chassis to support 6 drive slots. You could also physically wire it to support more than 6 drives, but you would be left with a configuration where you can't slide out the node without taking off the lid and removing the SAS/SATA cables first.

The drive connections are wired through an "interposer card" and a connector between the MB tray and the node. The interposer card only has 6 connectors for drives on it (amusingly, mine also shipped with a 1068E-based SAS mezzanine card, and Dell designed it to expose exactly 6 of the 8 ports the chip natively supports - 4 on an 8087 connector and then two more on individual SAS/SATA connectors). They appear to have done this to support a version with only two nodes, each node connected to 4 drives.

I would imagine that there exists a 12-port (or at least an 8-port) version of the interposer card used on the 2.5" drive chassis. I was actually hoping to locate one, because I think - long term - I may want a 4-node system with 8 drives on one node and each of the others having exactly one drive. Not top of my priority list, but something for later.
Thank you so much PigLover.
 

Toddh

Member
Jan 30, 2013
Servers came in today. Man they are heavy. To be expected with everything they pack inside but still surprised me.

Patrick, yes, I ordered mine from pdneiman. Paid Monday and they arrived today. They came from Dallas and I am in Houston, so one-day shipping. I am curious - I see a lot of used hardware coming from different sellers in Carlton (a section near Dallas), TX. There must be a district up there that specializes in off-lease equipment.

Now to find an InfiniBand switch.
 

Toddh

Member
Jan 30, 2013
The eBay seller with the RAM is kanamura. He doesn't have any auctions right now, but he does have RAM available. He had 144GB of 8GB DIMMs in an R710 and stated it is 2Rx4. He listed the RAM as follows:

Technical Specs
8GB 2Rx4 PC3L-10600R-09-10
PC3
10600R
ECC

Will work in Dell PowerEdge R710, R610, R410, T710, T610, T410
Will work in DELL PRECISION WORKSTATION T7500 T5500
 

Patrick

Administrator
Staff member
Dec 21, 2010
Servers came in today. Man they are heavy. To be expected with everything they pack inside but still surprised me.
Yeah, I had to re-pack the servers at 4 AM on Wednesday. It was much harder trying to get everything to fit the second time. Was a good pre-workout though.

Off to the airport to go see them in the DC!
 

Patrick

Administrator
Staff member
Dec 21, 2010
On plane. Looking for some feedback. Is this type of thread helpful?
 

rubylaser

Active Member
Jan 4, 2013
Very helpful. I picked up one too. It might be nice to provide users some directions for using the IPMI features in your first thread though.
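
Until that write-up exists, most of the day-to-day IPMI stuff can be driven with ipmitool from any Linux box once the BMC is on the network. A rough sketch of what I mean - the BMC address and credentials below are placeholders, so substitute whatever your unit actually uses (and change any default password):

import subprocess

# Rough sketch: poke a node's BMC with ipmitool over the LAN.
# BMC_HOST / BMC_USER / BMC_PASS are placeholders - use whatever your unit
# is actually configured with.
BMC_HOST = "192.168.1.120"
BMC_USER = "root"
BMC_PASS = "changeme"

def ipmi(*args):
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(ipmi("chassis", "power", "status"))   # is the node powered on?
print(ipmi("sensor", "list"))               # temps, fan RPMs, voltages
# ipmi("chassis", "power", "on")            # remote power-on
# Serial-over-LAN is interactive, so run it straight from a shell instead:
#   ipmitool -I lanplus -H <bmc-ip> -U <user> -P <pass> sol activate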
 

Toddh

Member
Jan 30, 2013
Given that the InfiniBand cards are half the price of the 10GbE Ethernet ones, it's too bad Dell didn't use one of the Mellanox cards that will do either InfiniBand or 10GbE.
 

Toddh

Member
Jan 30, 2013
The thread has been very useful. I am looking forward, as time moves on, to seeing what people use them for and their configuration experiences.
 

cactus

Moderator
Jan 25, 2011
Given that the InfiniBand cards are half the price of the 10GbE Ethernet ones, it's too bad Dell didn't use one of the Mellanox cards that will do either InfiniBand or 10GbE.
From Patrick's post, he linked these. They look to be ConnectX-2 VPI cards, so they can run either EN or IB. wuffers asked Mellanox and confirmed that when set to EN they work as true Ethernet.
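
For what it's worth, with the in-kernel mlx4 driver the VPI port type can usually be read and flipped through sysfs. A rough sketch (the PCI address is a placeholder - find yours with lspci - and the exact attribute behavior depends on the kernel/OFED version, so treat it as a pointer rather than gospel):

from pathlib import Path

# Rough sketch: read (and optionally set) a ConnectX-2 VPI port type via the
# mlx4 driver's sysfs attribute. PCI_ADDR is a placeholder - find yours with
# `lspci | grep Mellanox`. Needs root to write, and the attribute name and
# accepted values can differ between kernel/OFED versions.
PCI_ADDR = "0000:03:00.0"
port1 = Path(f"/sys/bus/pci/devices/{PCI_ADDR}/mlx4_port1")

print("port 1 is currently:", port1.read_text().strip())  # "ib", "eth" or "auto"
# To switch port 1 to Ethernet:
# port1.write_text("eth\n")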
 

mulder

New Member
Feb 9, 2013
<snip>

Dell C6100 XS23-TY3 Drivers

Here is a link to current Dell drivers. Most components are very standard.
Hi Patrick,
Can you confirm the PowerEdge C6100 BIOS/ESM updates work on your Dell C6100 XS23-TY3? I can't add pictures to this thread to show it, but when updating my Dell C6100 XS23-TY3 I get this error message from the installer:

This update package is not compatible with your system
Your System: XS23-TY3
System(s) supported by this package: Cloud Products C6100

I had a long talk with Dell about this and was told the Dell XS23-TY3 systems were built for a specific customer and the BIOS cannot be updated. Mine has version 1.50 installed. I confirmed prior to purchasing that my server was eligible for all C6100 updates, based on a service tag lookup at dell.com. I tried every BIOS version from 1.56 to 1.69 without luck.

http://imageshack.us/photo/my-images/7/bioswdkxkwn32169a01.png/
http://imageshack.us/photo/my-images/823/esmfirmware4wcd9129a01.png/

Regards,
 

Jeggs101

Well-Known Member
Dec 29, 2010
Hi Patrick,
Can you confirm the PowerEdge C6100 BIOS/ESM updates work on your Dell C6100 XS23-TY3? I can't add pictures to this thread to show it, but when updating my Dell C6100 XS23-TY3 I get this error message from the installer:

This update package is not compatible with your system
Your System: XS23-TY3
System(s) supported by this package: Cloud Products C6100

I had a long talk with Dell about this and was told the Dell XS23-TY3 systems were built for a specific customer and the BIOS cannot be updated. Mine has version 1.50 installed. I confirmed prior to purchasing that my server was eligible for all C6100 updates, based on a service tag lookup at dell.com. I tried every BIOS version from 1.56 to 1.69 without luck.

Regards,
Good info.

Wondering if we can have everyone try this and post their BIOS versions, then work backward to figure out which customers these units were built for.

If ya can't upgrade, that tells me they are going to be stable BIOSes.
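
If people do post versions, something like this pulls the relevant strings in one go on Linux (a quick sketch - dmidecode needs root, and the exact strings will differ unit to unit):

import subprocess

# Quick sketch: grab the BIOS/system strings worth posting in the thread.
# Assumes Linux with dmidecode installed, run as root.
def dmi(keyword):
    return subprocess.run(["dmidecode", "-s", keyword],
                          capture_output=True, text=True).stdout.strip()

for kw in ("system-manufacturer", "system-product-name",
           "bios-version", "bios-release-date"):
    print(f"{kw}: {dmi(kw)}")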
 