Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server

Patrick

Administrator
Staff member
Dec 21, 2010
11,802
4,760
113
Hmmm. Was thinking you had done something more elegant!

I prefer sticky back velcro to double sided tape. Easier to remove things later for upgrade or repair.
Well, elegant was going to be 2.5" adapters, until I realized I had purchased something with a similar layout to the above... when I got there.

Velcro can be a bit tricky due to spacing.
 

mulder

New Member
Feb 9, 2013
31
0
0
Hi Dragon,
It looks like those HDD caddies are compatible with the Dell PowerEdge 700, 800, 1600, 1600SC, 1800, 1850, 2600, 2650, 2800, 2850, 3250, 6600, 6650, 6800, 6850, 7150 & PowerVault 220S, 221S, 220F, 650F, 660F searching by the part number. They won't fit into a C6100.

Regards,
 
Last edited:

Dragon

Banned
Feb 12, 2013
77
0
0
Hi Dragon,
It looks like those HDD caddies are compatible with the Dell PowerEdge 700, 800, 1600, 1600SC, 1800, 1850, 2600, 2650, 2800, 2850, 3250, 6600, 6650, 6800, 6850, 7150 & PowerVault 220S, 221S, 220F, 650F, 660F searching by the part number. They won't fit into a C6100.

Regards,
Yeah, someone at Dell decided it was a good idea to make the C6100 tray open to the left, while almost every other recent Dell server opens to the right. If the plastic doors on those cheap trays can be modified to fit upside down, that would be great.
 

McKajVah

Member
Nov 14, 2011
46
8
8
Norway
This may be a dumb question, but what kind of RAM does the C6100 take?

I've found some Crucial 4GB DIMMs with the following specs:

Company Program: Crucial
Configuration: 512Meg x 72
DDR Timings: CL=9
DIMM Type: Registered
Density: 4GB
Error Checking: ECC
Megabytes: 4096
Memory Type: DDR3 PC3-10600
Module Rank: Dual Ranked
Package: 240-pin DIMM
Platform Category: DDR3
Speed: DDR3-1333

When I check the guide on Crucial's site, they are not compatible... WHY??
 

dba

Moderator
Feb 20, 2012
1,478
181
63
San Francisco Bay Area, California, USA
Notes that will be helpful for others working with Infiniband on the C6100, especially using Windows:

I was unable to get the Infiniband cards to work properly on my C6100 at first. Everything appeared to work, but the ports always showed as disconnected. Further, the card would periodically stop appearing as a device for an hour or two at a time. I did finally get it working, and here is how:

1) Update the C6100 BIOS to 1.69 on each motherboard.
2) Update the C6100 BMC firmware to 1.29 on each motherboard.
3) Update the fan control board firmware to 118 once - fan board is shared across all motherboards.
4) Install WinMFT 2.7.2 on each motherboard
5) Download and extract new IB firmware to the WinMFT folder on each server. Resulting file is named fw-ConnectX2-rel-2_9_1000-059MP7.bin
6) Re-flash the IB card by running the following from within the WinMFT folder: flint -d mt26428_pci_cr0 -i fw-ConnectX2-rel-2_9_1000-059MP7.bin burn
7) Reboot
8) Install Mellanox WinOF VPI IB driver software. I installed MLNX_VPI_WinOF-3_2_0_win7_x64.exe
9) Reboot
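Steps 5 and 6 above can be sketched as a couple of commands run from the WinMFT folder. This is a minimal sketch using the device name and firmware file named in the post; querying first so you have a record of the running firmware version is my own precaution, since burning the wrong image can brick the card:

```shell
# Device name and firmware image as given in the steps above.
DEV=mt26428_pci_cr0
FW=fw-ConnectX2-rel-2_9_1000-059MP7.bin

flint -d "$DEV" query              # note the currently running firmware version
flint -d "$DEV" -i "$FW" burn      # re-flash the ConnectX-2 with the new image
```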
 

McKajVah

Member
Nov 14, 2011
46
8
8
Norway
Hi Patrick,
Can you confirm the PowerEdge C6100 BIOS/ESM updates work on your Dell C6100 XS23-TY3? I can't add pictures to this thread to show it, but when updating my Dell C6100 XS23-TY3 I get this error message from the installer:

This update package is not compatible with your system
Your System: XS23-TY3
System(s) supported by this package: Cloud Products C6100

I had a long talk with Dell about this & was told the Dell XS23-TY3 systems were built for a specific customer and the BIOS cannot be updated. Mine has version 1.50 installed. Before purchasing, I confirmed my server was eligible for all C6100 updates based on the service tag lookup at dell.com. I tried every BIOS version from 1.56 through 1.69 without luck.

http://imageshack.us/photo/my-images/7/bioswdkxkwn32169a01.png/
http://imageshack.us/photo/my-images/823/esmfirmware4wcd9129a01.png/

Regards,
Mulder: Did you manage to update the bios??

I can see that "dba" managed to update his in the last post?

Dba: Did you use the windows utility or Dos version when you upgraded the bios?
 

dba

Moderator
Feb 20, 2012
1,478
181
63
San Francisco Bay Area, California, USA
Infiniband update:

Using IP over Infiniband on Win2008R2 with all default settings (no tuning), I get 1,960 MB/second throughput for reads and more IOPS than you can imagine. This is far short of what you'd get using a lightweight protocol, but it's fantastic considering how easy it is to use Infiniband when it's just emulating an Ethernet adapter.
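As a back-of-the-envelope check on why that is "far short" of a lightweight protocol (my arithmetic, not from the post; the ~32 Gbit/s figure is the usable data rate of a QDR link after 8b/10b encoding):

```shell
# Convert the measured 1,960 MB/s to Gbit/s and compare against
# QDR Infiniband's roughly 32 Gbit/s of usable bandwidth.
mbps=1960
gbits=$(( mbps * 8 / 1000 ))
echo "IPoIB reads: ~${gbits} Gbit/s of ~32 Gbit/s usable QDR bandwidth"
```

So IPoIB here delivers roughly half the wire rate, which is the cost of running the full TCP/IP stack over an emulated Ethernet adapter.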
 

Patrick

Administrator
Staff member
Dec 21, 2010
11,802
4,760
113
Hi, do you have any idea whether passthrough will work with ESXi?
At that price per node there is no more need for a virtual SAN appliance... but always interesting to know :)
It should. VT-d was present in that generation of Xeons.
 

mulder

New Member
Feb 9, 2013
31
0
0
I used the Windows executable. It's automatic once you run it and takes about two minutes.
I sent my XS23-TY3 back on Tuesday and ordered one from another vendor who guaranteed the BIOS/ESM/BMC/FCB can be upgraded and will have it updated to the latest versions before shipping. I should have it by the end of next week. Whichever customer Dell DCS built the server I just sent back for wanted the BIOS locked down.

Regards,
 
Last edited:

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Always use RDIMMs - they protect the address and data lines, plus present far less load (more ranks usable). $65 for 8GB is the average going price for RDIMM. Remember that this generation of Xeon will typically drop the RAM to 800MHz at 3 DPC (if 3 DPC is possible at all), and sometimes (HP does it) you can overclock the RAM when 2 DPC is used.

There are two revisions of the 5520 chipset. The original one was really buggy and you'd get random ECC errors if fully populated to 3 DPC (aka 9 DIMMs per socket), so many folks only allowed 6 DIMMs per socket. The early 5520 chipset also did not support the features of the 5600-series CPUs (AES/LV-DIMM), so you'd boot up and see that your shiny 6-core has no AES support :( These were replaced for the most part, since the early chipset had the buggy RAM timing issues and most certainly wouldn't do 9 large DIMMs for 144GB (per socket!). This is important to remember! For example, HP would replace the motherboard under warranty for folks who complained and had the old chipset. Dell would spend hours diverting your attention because they didn't want to eat the cost, and would ask you for proof of ownership, etc. (pisses me off still).

Check out the MSA2312sa, $499 [ebay] - these support four 3Gbps x4 SAS links and twelve 3.5" drives. ESXi supports SCSI/ATS locking fine with them. Microsoft uses them for 4-way clusters in their stores. The newer model, the P2000 G3, ups the port count to 8 and the link speed to 6Gbps x4 (24Gbps x 8).

So yeah, if you could rig up an MSA2312sa with one SAS port to each node, the dual-controller device would give you a real SAN - fill her up with RE4 4TB SAS drives and there's no need for a VSA. DAS baby, DAS :) $499 - go check it out. It's expandable by chaining more MSA70/60 boxes. It might not be super fast, but 146GB 15K SAS drives are stupid cheap these days since nobody wants them. These are not JBOD, but true dual-controller RAID systems.
 

Toddh

Member
Jan 30, 2013
120
8
18
I bought two C6100s from pdneiman and he was great. Really good communication, and he made me a great deal on the Infiniband ConnectX cards. However, while looking for an Infiniband switch I had a long conversation with an eBay seller from Carlton, TX, asking about all the stuff coming out of there. We got onto the topic that I had just bought some C6100s, and he said he had them on eBay too for only $899. I told him I had never seen them??? Here is the link.

http://www.ebay.com/itm/DELL-POWERE...123928?pt=COMP_EN_Servers&hash=item19d8e5d758

So he has everything I need - barebones servers, RAM, HDD cages, the Infiniband switch - and he made me a deal that was better than piecing everything together from separate vendors. One-stop shopping.

BTW - Dragon, VistaComputer has the HDD trays for $15. You can also ask the above seller.


 

PigLover

Moderator
Jan 26, 2011
2,911
1,231
113
...There are two revisions of the 5520 chipset; the original one was really buggy and you'd get random ECC errors if fully populated to 3 DPC (aka 9 DIMMs per socket), so many folks only allowed 6 DIMMs per socket...
Luckily (or not, if you really wanted a large-memory config), the motherboard on the C6100 node only provides 2 banks per CPU (6 RDIMMs per CPU), so no matter which rev of the 5520 it carries, you can't tickle this bug.
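Working out what that means for capacity (my arithmetic; the 8GB RDIMM size is an assumption taken from the going price quoted earlier in the thread, and larger RDIMMs would scale the total accordingly):

```shell
# 2 DIMMs per channel x 3 channels = 6 RDIMM slots per CPU on a C6100 node,
# so a dual-socket node tops out well below the 3 DPC config that hits the bug.
channels=3; dpc=2; cpus=2; dimm_gb=8
slots_per_cpu=$(( channels * dpc ))
node_max_gb=$(( slots_per_cpu * cpus * dimm_gb ))
echo "${slots_per_cpu} slots/CPU, up to ${node_max_gb}GB per node with ${dimm_gb}GB RDIMMs"
```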
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,480
222
63
Unixsurplus is way overpriced.

That seller dropped prices by $75. It used to be $975.
 

McKajVah

Member
Nov 14, 2011
46
8
8
Norway
There are two revisions of the 5520 chipset. The original one was really buggy and you'd get random ECC errors if fully populated to 3 DPC (aka 9 DIMMs per socket), so many folks only allowed 6 DIMMs per socket. The early 5520 chipset also did not support the features of the 5600-series CPUs (AES/LV-DIMM), so you'd boot up and see that your shiny 6-core has no AES support :( These were replaced for the most part, since the early chipset had the buggy RAM timing issues and most certainly wouldn't do 9 large DIMMs for 144GB (per socket!). This is important to remember! For example, HP would replace the motherboard under warranty for folks who complained and had the old chipset. Dell would spend hours diverting your attention because they didn't want to eat the cost, and would ask you for proof of ownership, etc. (pisses me off still).
Is it possible to check the version/rev. number of the 5520 chipset?
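One way to check, sketched under the assumption of a Linux live environment (ESXi and Windows need other tools): `lspci` prints the 5520 I/O Hub's stepping as a `(rev NN)` code at the end of the line. Which rev values correspond to the buggy early stepping isn't something this thread confirms, so compare against Intel's 5520 specification update. The sample line below is illustrative, not from a real C6100:

```shell
# List Intel devices and look for the 5520 I/O Hub; the stepping shows as "(rev NN)".
lspci -d 8086: 2>/dev/null | grep -i '5520' || true

# Parsing sketch against a hypothetical sample lspci -nn line:
sample='00:00.0 Host bridge [0600]: Intel Corporation 5520 I/O Hub to ESI Port [8086:3406] (rev 13)'
rev=$(printf '%s\n' "$sample" | sed -n 's/.*(rev \([0-9a-fA-F]*\)).*/\1/p')
echo "5520 stepping rev: $rev"
```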