Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
If anybody is looking for the 6-SATA cables pointed out by dba - I found them off-eBay here. The price was lower than I could find on eBay a few weeks ago, though I haven't checked recently. I needed the 3-drive SATA harnesses for mine, and these were actually less expensive than any 3-drive harness I could find.

Still - at $80 for all four sleds - it's a significant fraction of what you paid for the chassis...
 

s0lid

Active Member
Feb 25, 2013
259
35
28
Tampere, Finland
Hey PigLover, I saw your C6100 setup on [H]. Just a little question about the Mellanox 10Gig NICs: have you found a source for low-profile brackets for those cards?
 

PigLover

Moderator
Jan 26, 2011
3,184
1,545
113
No. Right now they are sitting in the chassis bracket-less. Not good, as vibration and temperature changes will tend to work them out of the socket over time (I did tie off the DAC cables to try and stabilize them a bit).

I have a few feelers out for LP brackets. It may not matter in the end, as I am getting Intel-based mezzanine cards in the next couple of days and will be pulling the Mellanox ENs back out. They may end up back on eBay soon.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Re-posting from another thread:

Do you need 2.5" sleds for your Dell C6100? Shocked at the price? Now there is a far less expensive option.

You can find 2.5" drive sleds for a Dell C6100 on eBay, but they are some of the most expensive of all drive sleds - $20 to $30 each. The good news is that HP drive sleds fit - and fit perfectly. The sleds you need are those from any G5, G6, or G7 DL series server or MSA70 drive chassis. I'd post a photo but image upload is not supported in this forum. Luckily for you, these are some of the most popular drive sleds in the world and some of the least expensive. I just picked up 48 sleds for $220 - less than $5 each.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Re-posting from another thread:

Do you need 2.5" sleds for your Dell C6100? Shocked at the price? Now there is a far less expensive option.

You can find 2.5" drive sleds for a Dell C6100 on eBay, but they are some of the most expensive of all drive sleds - $20 to $30 each. The good news is that HP drive sleds fit - and fit perfectly. The sleds you need are those from any G5, G6, or G7 DL series server or MSA70 drive chassis. I'd post a photo but image upload is not supported in this forum. Luckily for you, these are some of the most popular drive sleds in the world and some of the least expensive. I just picked up 48 sleds for $220 - less than $5 each.
Added the info on these to the 1st post.
 

seang86s

Member
Feb 19, 2013
164
16
18
FYI, I've used Western Digital "IcePacks" for this purpose in the past. They are the 3.5" frames from 2.5" VelociRaptors that people have removed for whatever reason. Slightly more expensive than the Dell converters, but if you want something of higher quality this may do the trick.

wd icepack | eBay

Hi Patrick,

One caution regarding the Dell 2.5" to 3.5" converters: The quality control is not very good. One of my four had mounting holes drilled 2mm too far forward, which made the SATA connection intermittent. If anyone buys these, be sure to double-check theirs. A bit of filing fixed my issue, but I have to say that I was disappointed.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
FYI, I've used Western Digital "IcePacks" for this purpose in the past. They are the 3.5" frames from 2.5" VelociRaptors that people have removed for whatever reason. Slightly more expensive than the Dell converters, but if you want something of higher quality this may do the trick.

wd icepack | eBay
Great call-out. I wonder if the OEM sells these under a label other than WD.
 

seang86s

Member
Feb 19, 2013
164
16
18
Great call-out. I wonder if the OEM sells these under a label other than WD.
Oh, and one other plus is that the IcePack gives you mounting holes on the sides as well as the bottom, for applications other than the Dell C6100. Not sure if the Dell adapter accommodates this.

I had a few 5-in-3 hot-swap cages that had sleds with bottom mounting holes. I used these IcePacks to mount 2.5" drives in those sleds.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
That's an excellent idea - cheap, solid looking, and universal. I like it!

FYI, I've used Western Digital "IcePacks" for this purpose in the past. They are the 3.5" frames from 2.5" VelociRaptors that people have removed for whatever reason. Slightly more expensive than the Dell converters, but if you want something of higher quality this may do the trick.

wd icepack | eBay
 

badatSAS

Member
Nov 7, 2012
103
0
16
Boston, MA
I've been using the MB882SP-1S-1B in my Supermicro 3.5" trays and they've been working perfectly for me; it should work in anything for 2.5"-to-3.5" conversion. One nice thing is that there are no screws involved in putting the disk in - it's held in with a clip and some pressure. But the IcePacks look a LOT more hardcore.
 

cafcwest

Member
Feb 15, 2013
136
14
18
Richmond, VA
Ok, a few more questions for those with complete nodes:


1. The black plastic shroud (pictured over CPU 0 in 3 of the 8 nodes in this picture). Other than airflow, does it serve any purpose? Is anyone running without these? Any issues?




2. I got my 2.5" node SATA cables in today. As bulky as the cable is, it doesn't exactly lie flat or bend very well. As can be seen in the pictures, the cable extends outside the 'profile' of the node, meaning that any time the node is inserted into the host, the cable ends up hanging out/getting in the way/etc. In the factory configuration, is there some zip tie/fastener/bracket that keeps the SATA cable more 'inside' the node?






I bought my two chassis from Justin @ Vista. He has been great to work with, sending out my missing parts right away, no questions asked. But as much as I wanted the 24 x 2.5" chassis, I am really starting to feel like I should have just bought a complete server and been done with it. If this were just tinkering, I wouldn't mind (as much) tracking down all the little bits and pieces. But as I am trying to test some work-related ideas, I really want to get to work already. Food for thought for anyone in a similar situation.
 

Dragon

Banned
Feb 12, 2013
77
0
0
2. I got my 2.5" node SATA cables in today. As bulky as the cable is, it doesn't exactly lie flat or bend very well. As can be seen in the pictures, the cable extends outside the 'profile' of the node, meaning that any time the node is inserted into the host, the cable ends up hanging out/getting in the way/etc. In the factory configuration, is there some zip tie/fastener/bracket that keeps the SATA cable more 'inside' the node?
Try one of those ultra-thin SATA cables; the ones from Orico are quite popular around here.



If they're not available on your side, I am sure other companies make them too. Here is what I found after a quick search; the post is outdated, but the descriptions are still accurate.


CABLES-ATX, PCIE, FAN, SATA, ESATA, DP, TOSLINK, MOLEX, AV, CABLE MODDING KITS



Item CAB-23. 90cm Ultra Slim SATA3.0 Flat Cable Straight to Right L Angle with metal latch - BLUE
Specifications:
Type: Server Grade / SAS
Length: 90cm
Colour: Blue
Connectors: Straight to Right L Angle
5 of these cables bundled together = same thickness as 1 normal SATA flat cable
Server Grade Quality. Much better than normal SATA cables
***All the SATA Cables are compatible with SATA 3.0Gbps & SATA 6.0Gbps***
Brand New
FIXED: $8 each


Item CAB-24. 60cm Ultra Slim SATA3.0 Flat Cable Straight to Right Angle with metal latch - BLUE
Specifications:
Type: Server Grade / SAS
Length: 60cm
Colour: Blue
Connectors: Straight to Right Angle
5 of these cables bundled together = same thickness as 1 normal SATA flat cable
Server Grade Quality. Much better than normal SATA cables
***All the SATA Cables are compatible with SATA 3.0Gbps & SATA 6.0Gbps***
Brand New
FIXED: $8 each
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
The plastic shroud appears to be for airflow only. It seems to direct air away from the PCIe slot toward the IOH and mezzanine card. I would assume that it is an important piece of gear under some circumstances, probably when using the hottest CPUs, but with low-watt Xeons like the L5520 I'd feel fine about leaving it out entirely.

Carefully work the cable some more, aiming for consistent twists and bends that push the middle of the cable down rather than up as in your photos. There are no fasteners, but it should stay in place quite well. Also, the right-angle cable on the mobo side (as opposed to the interposer side) should be twisted so that the blue cable is always "under" the black connector - this makes room for a PCIe card, which lies right on top of that SATA connector.

Ok, a few more questions for those with complete nodes:

1. The black plastic shroud (pictured over CPU 0 in 3 of the 8 nodes in this picture). Other than airflow, does it serve any purpose? Is anyone running without these? Any issues?

2. I got my 2.5" node SATA cables in today. As bulky as the cable is, it doesn't exactly lie flat or bend very well. As can be seen in the pictures, the cable extends outside the 'profile' of the node, meaning that any time the node is inserted into the host, the cable ends up hanging out/getting in the way/etc. In the factory configuration, is there some zip tie/fastener/bracket that keeps the SATA cable more 'inside' the node?

...
 

RimBlock

Active Member
Sep 18, 2011
837
28
28
Singapore
For those who have not read it, there is quite a nice writeup on the C6100 here. It's from 2010, when the author was bringing a number of the units online.

RB
 

acesea

New Member
Oct 7, 2011
8
1
3
From Patrick's post he linked these. They look to be ConnectX-2 VPI cards, thus EN or IB. wuffers asked Mellanox and confirmed that when set to EN they work as true Ethernet.
So are there any advantages to getting the 10GbE TCK99 versus the QDR IB JR3P1 if we're just going to use Ethernet but would like the extra options? Is the JR3P1 going to perform worse than the TCK99 for Ethernet?

Infiniband update:

Using IP over InfiniBand on Win2008R2 with all default settings (no tuning), I get 1,960MB/second throughput for reads and more IOPS than you can imagine. This is far short of what you'd get using a lightweight protocol, but it's fantastic considering how easy it is to use InfiniBand when it's just emulating an Ethernet adapter.
If that's the throughput of the TCK99, it looks to be better than a 10GbE adapter. So is the QDR IB JR3P1's connection the same as the TCK99's - SFP+ with twinaxial copper links? And after installing the JR3P1, does a menu appear in the BIOS for quickly switching its function from IB to EN?
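Since IPoIB just presents itself to the OS as another IP interface, any generic TCP tool can sanity-check the link once it's up. A minimal Python sketch (the port number and 4GiB transfer size are arbitrary placeholders); run the server on one node and the client on another:

```python
# tcp_bench.py - crude TCP throughput check over any IP interface,
# including IPoIB, since the OS treats it like ordinary Ethernet.
# Usage: "python tcp_bench.py server" on one node,
#        "python tcp_bench.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5201            # arbitrary placeholder port
CHUNK = 1 << 20        # 1 MiB per send
TOTAL = 4 << 30        # push 4 GiB through the link

def server():
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.time() - start
    print(f"{received / secs / 1e6:.0f} MB/s from {addr[0]}")

def client(host):
    sock = socket.create_connection((host, PORT))
    buf = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(buf)
        sent += CHUNK
    sock.close()

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        server()
```

A single TCP stream usually won't show the full QDR rate; it just confirms the link is up and running in the mode you expect.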



There are two steppings of the 5520 chipset. The original one was really buggy: you'd get random ECC errors if fully populated at 3DPC (i.e. 9 DIMMs per socket), so many folks only ran 6 DIMMs per socket. The early 5520 stepping also did not support the features of the 5600-series CPUs (AES-NI / LV DIMMs), so you'd boot up and see that your shiny 6-core has no AES support :( These were replaced for the most part, since the early stepping had the buggy RAM timing issues and most certainly wouldn't handle 9 large DIMMs for 144GB (per socket!). This is important to remember! For example, HP would replace the motherboard under warranty for folks who complained and had the old stepping. Dell would spend hours diverting your attention because they didn't want to eat the cost, and would ask you for proof of ownership, etc. (pisses me off still).
Never heard of this. Any links for additional info, or a way to identify the troublesome chipset revisions? I'd hate to get 5600 CPUs and not get AES.
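Not authoritative, but here's a quick way to poke at a given box, assuming a Linux live environment (the lspci match pattern is illustrative, and the reported rev would need to be compared against Intel's 5520 spec update):

```python
# check_5520.py - rough check of 5520 IOH stepping and AES-NI support.
# Assumes Linux with lspci installed; the match pattern is illustrative.
import re
import subprocess

# The 5520 IOH appears in lspci output with a "(rev NN)" suffix - that
# revision is what you'd compare against Intel's errata documents.
pci = subprocess.run(["lspci"], capture_output=True, text=True).stdout
for line in pci.splitlines():
    if "5520" in line or "I/O Hub" in line:
        print(line)

# On CPUs with AES-NI, the "aes" flag shows up in /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    flags = f.read()
has_aes = re.search(r"^flags\s*:.*\baes\b", flags, re.M)
print("AES-NI:", "present" if has_aes else "absent")
```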


Question about each node's PCIe slot: what card and bracket fit? A low-profile card with a full-height bracket?
 

ifccnb

New Member
Mar 9, 2013
2
0
0
rendering nodes

Hello server folks,

I would like to ask for some advice regarding these units. First I want to preface this by saying that I do have some experience with computer hardware and software. I have built and tinkered with my own desktop platforms (mostly Windows). I don't mind getting my hands dirty; however, I don't have any experience with server hardware such as the C6100.

I am into graphic design and use software like 3ds Max, the Adobe suite, and the like. I have been looking into a solution for offloading rendering tasks, and was thinking of putting a project like this together for my home office. ---> Link My thoughts were along the lines of four bang-for-the-buck Intel "Ivy Bridge" nodes, with only the essentials: motherboard, CPU, RAM, PSU, and hard drive. Price per node is between $500 and $600, with an additional $150 or so for extras.

I must say that the Dell C6100 XS23-TY3 has piqued my curiosity. 4 nodes, 8 CPUs, 32 cores, and 96GB of DDR3 at $900 + shipping + the cost of 4 HDs... on the surface that seems like a good cost/performance ratio! All packed into one space-saving, albeit loud, unit.

Questions as I'm weighing my options:
1. OS - I know the C6100 hardware only "officially" supports server-grade operating systems. I apologize if this is a silly question, but would Windows 7 64-bit be an option? Isn't Windows Server 2008 R2 a "beefy" version of Windows 7? I would prefer a Windows solution as I fear the penguin. :eek:

2. OS installation - Can it be as easy as plugging in a USB DVD drive and setting the BIOS to boot from it?

3. CPU performance - At stock speeds, how do 2x L5520s compare to, let's say, an i7-3770K (LGA 1155 Ivy Bridge)? I have seen some benchmark scores and it looks close, but I'm not sure how reliable the sources are or whether they even apply to my situation.

4. Future-proofing (well... to an extent) - On the C6100 XS23-TY3 there is an upgrade path to the 6-core 5600 series of processors. They are still out of my price range, but in the future, who knows. I'm not sure what the upgrade path on a socket 1155 Ivy Bridge build is at the moment.

5. Reliability - What kind of life span can one expect from a refurbished C6100? I know, nobody can answer that; it's a crap shoot even with new hardware. Perhaps someone can share some of their experiences with these servers.

So I guess this post has turned from a few Q&As into a "what would you recommend" for my situation.
Option 1 - Roughly $2400 for 4 nodes with new hardware and an OS that I am accustomed to.
Option 2 - $900 + shipping + the cost of 4 HDs. Attempt to maximize value by using refurbished hardware, and put in the effort to become familiar with server hardware and OSes. I am not opposed to learning on the fly, but I also don't want to get consumed by it. (Rough per-core math below.)
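Back-of-the-envelope per-core math on those two options; the shipping and drive figures are my own guesses, not quotes:

```python
# Rough cost-per-core comparison; shipping and drive prices are assumed.
option1 = 2400                 # 4 new Ivy Bridge nodes, as priced above
option2 = 900 + 100 + 4 * 50   # C6100 + assumed $100 shipping + 4 x $50 HDs
cores1 = 4 * 4                 # 4 quad-core i7s
cores2 = 8 * 4                 # 8 quad-core L5520s
print(f"Option 1: ${option1} / {cores1} cores = ${option1 / cores1:.0f}/core")
print(f"Option 2: ${option2} / {cores2} cores = ${option2 / cores2:.0f}/core")
```

Per-core the C6100 comes out at roughly a quarter of the cost; per-clock the newer cores are faster, so the answer hinges on how well the renderer threads.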

Thanks for taking the time to read this long post. I appreciate any feedback.

P.S. Is there anything that I have overlooked or need to consider regarding the C6100 XS23-TY3?
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
Patrick said that two L5520s at stock are at least as fast as an OC'd 3770K in F@H, because the application takes advantage of threading. More threads = better. Single-threaded apps will likely suffer from the lower clock speed. But if you are happy with loud fans, you can get higher-clocked Xeon 1366 chips and go from there.
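If you want a feel for how well a workload threads before buying (per the rendering question above), here's a toy scaling test; the pure-Python busy-work is just a stand-in for a render job, so treat the numbers as illustrative only:

```python
# scale_test.py - time a fixed amount of CPU-bound work at increasing
# worker counts to see how much a workload gains from more cores.
import time
from multiprocessing import Pool

def burn(n):
    # deliberately dumb CPU-bound work
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 16            # 16 equal chunks of work
    for workers in (1, 2, 4, 8, 16):
        start = time.time()
        with Pool(workers) as pool:
            pool.map(burn, jobs)
        print(f"{workers:2d} workers: {time.time() - start:.2f}s")
```

If the times keep dropping out to 16 workers, the 16 threads of a dual-L5520 node win; if they flatten early, the higher-clocked quad does.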