Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server

McKajVah

Member
Nov 14, 2011
46
8
8
Norway
Hi.

Could someone with a C6100 take a measurement for me? I want to know how long the chassis is from the rear to the back of the fans (basically the length of the motherboards).

I'm thinking of cutting away the whole drive assembly to shorten the chassis... :)

-Kaj
 

McKajVah

Member
Nov 14, 2011
46
8
8
Norway
Is it possible to enable VT-d in the BIOS? I can't seem to find anything in the hardware owner's manual.
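Not a BIOS answer, but once a node is booted into Linux you can at least sanity-check what the CPU reports. A minimal Python sketch (my own illustration, assuming a /proc/cpuinfo-style dump): the `vmx` flag indicates VT-x on Intel CPUs, while VT-d itself is a chipset/BIOS feature that only shows up as DMAR lines in `dmesg` when enabled.

```python
# Check a /proc/cpuinfo-style dump for virtualization-related CPU flags.
# Note: "vmx" indicates VT-x support on Intel CPUs; VT-d (IOMMU) is a
# chipset/BIOS feature and shows up as DMAR lines in `dmesg`, not here.

def has_flag(cpuinfo_text: str, flag: str) -> bool:
    """Return True if any 'flags' line in the dump contains the given flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if flag in line.split(":", 1)[1].split():
                return True
    return False

# Hypothetical sample dump for illustration:
sample = """processor : 0
model name : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
flags : fpu vme de pse tsc msr pae vmx ept smx est
"""

print(has_flag(sample, "vmx"))  # True -> CPU reports VT-x
```

Even if `vmx` shows up, VT-d still has to be switched on in the BIOS before the kernel will see an IOMMU.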

Did anyone other than "mulder" try to upgrade the BIOS?

-Kaj
 

mulder

New Member
Feb 9, 2013
31
0
0

Dragon

Banned
Feb 12, 2013
77
0
0
I've been digging around for the past two days looking for ways to fit some extra SSDs inside each node of the Dell C6100; so far there seem to be two viable solutions.

Although these cards only let you add one mSATA SSD per node (I was looking for a way to add three SSDs), at least your three mechanical disks under ZFS won't choke when your DB writes heavily to them, and you can sleep better at night knowing a 3-way ZFS mirror is better than a 2-way.
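For what it's worth, the redundancy gain is easy to put numbers on. A back-of-envelope Python sketch (my simplification: independent disk failures with the same per-disk failure probability p, ignoring rebuild windows): an n-way mirror only loses data if all n disks fail, so going from 2-way to 3-way multiplies the loss probability by another factor of p.

```python
# Back-of-envelope comparison of 2-way vs 3-way mirror durability,
# assuming independent disk failures with equal probability p.
# This ignores rebuild windows and correlated failures.

def mirror_loss_probability(p: float, ways: int) -> float:
    """Probability that ALL disks in an n-way mirror fail (data loss)."""
    return p ** ways

p = 0.05  # hypothetical 5% per-disk failure probability over some window
print(f"2-way mirror loss: {mirror_loss_probability(p, 2):.6f}")  # 0.002500
print(f"3-way mirror loss: {mirror_loss_probability(p, 3):.6f}")  # 0.000125
```

So at a hypothetical 5% per-disk failure rate, the 3-way mirror is 20x less likely to lose the pool, at the cost of one more disk's worth of capacity.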


Koutech IO-PESA238 PCI-Express 2.0 x1 Low Profile Ready SATA III (6.0Gb/s) Dual Channel Controller Card with HybridDrive Support (1 x Int+1 x mSATA)
http://www.neweggbusiness.com/Product/Product.aspx?Item=N82E16816104024



Two 6.0Gbps SATA III channels
One (1) internal 7-pin SATA III connector
One (1) internal mSATA (mini PCIe) socket
Supports all mSATA (mini-SATA) SSD devices at 1.5/3.0/6.0 Gbps
Compliant with 5Gb/s PCI Express 2.0 specifications
Transfer Rate up to 5Gb/s


OWC Mercury AccelsiorM mSATA PCIe Controller
http://eshop.macsales.com/item/Other World Computing/PCIEACCELM



AHCI compliant (no drivers required)
Mac and PC-bootable
Sustained Reads (up to) 380MB/s
Sustained Writes (up to) 380MB/s
3 Year OWC Warranty


I am going to try the Koutech IO-PESA238 as it appears to have much higher throughput. Supposedly it is designed to turn a normal HDD into an SSD hybrid, but someone on Newegg said they booted into Windows from an mSATA drive using it, so I am going to assume it will work as a standalone drive.
 

PigLover

Moderator
Jan 26, 2011
2,917
1,234
113
Nice idea on the mSATA. You could also see if a RevoDrive would fit. I haven't been a fan of direct-PCIe SSD solutions but it could be worth a look if it fits.
 

Toddh

Member
Jan 30, 2013
120
8
18
Dragon,

Not sure how far you are willing to go with this, but the motherboard has six SATA ports. I have not looked in the BIOS yet to see if they are active, but I am sure someone can chime in there.
 

Dragon

Banned
Feb 12, 2013
77
0
0
Nice idea on the mSATA. You could also see if a RevoDrive would fit. I haven't been a fan of direct-PCIe SSD solutions but it could be worth a look if it fits.
Yeah, I don't like them either; they are way too overpriced, and their so-called super-duper IOPS often get surpassed by a new and much cheaper consumer SSD a year or two later. There doesn't seem to be enough space, but it might fit if you bend the card...

Dragon,

Not sure how far you are willing to go with this, but the motherboard has six SATA ports. I have not looked in the BIOS yet to see if they are active, but I am sure someone can chime in there.
The 24 x 2.5" HDD version seems to use the same motherboard, so I assume all six SATA ports are active. The problem is getting power to the SSDs. I can see a bunch of electrical wires behind the PSU; if someone with an electrical engineering background can offer tips on how to tap extra SATA power cables off it without setting the thing on fire, I am willing to give it a go. All I need is three extra SSDs for one of the nodes; the other three can run without them.
 

Dragon

Banned
Feb 12, 2013
77
0
0
Does the LSI 9202-16e fit inside the node's riser card? From the pictures it looks like it might just fit if the plastic air duct is removed. (and shave off a few mm from the heatsink if necessary)



If it does I might just skip all the internal SSD hassle and connect the node to an external disk chassis and go all the way.
 

Scout255

New Member
Feb 12, 2013
58
0
0
Does the LSI 9202-16e fit inside the node's riser card? From the pictures it looks like it might just fit if the plastic air duct is removed. (and shave off a few mm from the heatsink if necessary)



If it does I might just skip all the internal SSD hassle and connect the node to an external disk chassis and go all the way.
This would be a pretty impressive setup if that card fits. It would allow you to have a nice ZFS server as one of the nodes. The card is low profile, so hopefully it will fit...
Anyone have a card to test?
 

dba

Moderator
Feb 20, 2012
1,478
181
63
San Francisco Bay Area, California, USA
Good news: I put a 9202 into each node of my C6100 two days ago. It is a *very* tight fit, but it works. When installed, the back end of the 9202 card overhangs part of the heatsink. It does not touch, but it does overhang. If you take a look at the very flexible air shroud, you will find a slit in it right next to the heatsink. I had no idea why they would do such a thing until I installed the 9202 card. The PCIe circuit board slips through the slit - that's how tight the fit is.

My CPUs have not arrived, so I haven't booted yet. The first thing that I'll be doing is looking to see if the air shroud is blocking too much cooling air from reaching the 9202. By design, it steals some air from the PCIe card and sends it toward the Infiniband area of the motherboard. With the giant 9202 needing a significant amount of cooling, I might change that.


Does the LSI 9202-16e fit inside the node's riser card? From the pictures it looks like it might just fit if the plastic air duct is removed. (and shave off a few mm from the heatsink if necessary)



If it does I might just skip all the internal SSD hassle and connect the node to an external disk chassis and go all the way.
 

Patrick

Administrator
Staff member
Dec 21, 2010
11,805
4,760
113
Wow! That would be crazy. 2x QDR + a lot of RAM + 9202-16e in each node.
 

Dragon

Banned
Feb 12, 2013
77
0
0
Good news: I put a 9202 into each node of my C6100 two days ago. It is a *very* tight fit, but it works. If you take a look at the very flexible air shroud, you will find a slit in it right next to the heatsink. I had no idea why they would do such a thing until I installed the 9202 card. The PCIe circuit board slips between the slit - that's how tight the fit.

My CPUs have not arrived, so I haven't booted yet. The first thing that I'll be doing is looking to see if the air shroud is blocking too much cooling air from reaching the 9202. By design, it steals some air from the PCIe card and sends it toward the Infiniband area of the motherboard. With the giant 9202 needing a significant amount of cooling, I might change that.
Well that just changed everything :cool:

One can build a whole rack of cheap ZFS with just 2 x C6100 plus a bunch of disk chassis.
It'll even work with 1-meter SFF-8644 cables if each C6100 is sandwiched between disk chassis above and below.
That is some data center porn right there :rolleyes:

Wow! That would be crazy. 2x QDR + a lot of RAM + 9202-16e in each node.
Hopefully we don't need 100 dBA fans to cool that electric heater...
 

Scout255

New Member
Feb 12, 2013
58
0
0
Good news: I put a 9202 into each node of my C6100 two days ago. It is a *very* tight fit, but it works. If you take a look at the very flexible air shroud, you will find a slit in it right next to the heatsink. I had no idea why they would do such a thing until I installed the 9202 card. The PCIe circuit board slips between the slit - that's how tight the fit.

My CPUs have not arrived, so I haven't booted yet. The first thing that I'll be doing is looking to see if the air shroud is blocking too much cooling air from reaching the 9202. By design, it steals some air from the PCIe card and sends it toward the Infiniband area of the motherboard. With the giant 9202 needing a significant amount of cooling, I might change that.
Wow, that is very impressive indeed! Thank you for trying out the fit.

One of these servers + the 9202 card + 10G infiniband + some SE3016 chassis' (Or the Norco + HP expander) would be very nice indeed.....
 

Biren78

Active Member
Jan 16, 2013
550
94
28
These are so cool. You can do all this with a normal server of course, but I can't find anything price-wise that compares well if you need four. I guess expansion sucks. Anyone building Hadoop clusters with these?
 

Toddh

Member
Jan 30, 2013
120
8
18
Given the price of the barebones units, I was toying with an idea along much the same lines: put an LSI 9260-16i in a single node, pull the other three nodes out, and connect it to the twelve HDD bays at the front. You could even mount a couple more HDDs inside the chassis in the space vacated by the removed nodes if you were so inclined. A 12-bay SAN with redundant power.
 

PigLover

Moderator
Jan 26, 2011
2,917
1,234
113
Should work; very doable.

Be careful with the cabling, as 4x SFF-8087 will likely do a number on the airflow.

You will have to hard-cable to the HD backplane, so you'll lose the ability to just slide out the MB sled - you'll have to open the case and remove the disk cables first. But that's not too much of a hassle for a modded case like this.
 

dba

Moderator
Feb 20, 2012
1,478
181
63
San Francisco Bay Area, California, USA
For those toying with the idea of modding a C6100, here is how the disk subsystem is built and wired:


The path goes like this:
Disk <-> Backplane <--> breakout Cable <--> Midplane board <--> Interposer board <--> Cable <--> Motherboard

That's a fairly complicated path, but the good news is that you can "intercept" at several different points using standard cables and connectors.


In more detail:
•The disks are in a hot-swap backplane, either twelve x 3.5" or 24 x 2.5" disks. I'm only going to describe the twelve-disk version here. The 24-disk version uses a different backplane and midplane.
•The backplane gets power from both power supplies for redundancy. Voltages are standard, but the power connectors are not four-pin Molex.
•The "front side" disk connectors are single-ported SAS/SATA.
•The "back side" disk connectors (on the rear of the backplane) are normal SAS/SATA ports.
•A "forward" breakout cable combines sets of three SAS/SATA connectors to a SFF-8087 SAS connector. There are a total of four of these cables, one for each motherboard, three disks each.
•Each breakout cable plugs into the midplane board. The midplane board has two SFF-8087 connectors for each motherboard - eight total. Normally, only one is used per motherboard.
•The midplane board connects to the interposer board using a large connector. Within the interposer, the disk channels are converted to SATA connectors. While the midplane has two SFF-8087 connectors per motherboard for a total of eight disk channels, only six are converted to SATA - four from one connector and two from the other. The midplane is part of the chassis while the interposer is part of the slide-out motherboard cartridge, so this connection is made or broken when the motherboard chassis is inserted or removed.
•Cables connect the SATA connectors on the interposer board to the connectors on the motherboard. In a standard system these are SATA to SATA cables but if a SAS mezzanine card is in use, then one of them is a SFF-8087 to SATA "forward" breakout cable.

Rephrased:
Backplane: SATA disk in, SATA out.
Breakout cable: SATA in, SFF-8087 out
Midplane: SFF-8087 in, large custom connector out
Interposer: Large custom connector in, SATA out
Motherboard: SATA in

Each motherboard has six disk connectors on its interposer board and so can be wired to up to six disks without losing the ability to hot-swap motherboards in and out of the chassis. If you need more than six disks for one motherboard, you can make it work, but you'll have a Frankenstein on your hands when you are done.
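To make the stock wiring concrete, here is a toy Python sketch of the default 12 x 3.5" layout as described above: four nodes, one SFF-8087 breakout of three disks per node, six SATA channels available on each interposer. The node/slot names are mine for illustration, not Dell nomenclature.

```python
# Toy model of the stock C6100 12 x 3.5" disk wiring described above.
# Names and slot ordering are illustrative, not Dell part nomenclature.

NODES = 4
DISKS_PER_BREAKOUT = 3          # one SFF-8087 breakout cable per node
SATA_PORTS_PER_INTERPOSER = 6   # channels exposed on each interposer board

def default_disk_map():
    """Map each node to the chassis disk slots wired to it by default."""
    mapping = {}
    for node in range(NODES):
        start = node * DISKS_PER_BREAKOUT
        mapping[f"node{node}"] = [f"slot{start + d}" for d in range(DISKS_PER_BREAKOUT)]
    return mapping

m = default_disk_map()
print(m["node0"])  # ['slot0', 'slot1', 'slot2']

# Headroom per node if you wire extra disks to the unused interposer channels:
print(SATA_PORTS_PER_INTERPOSER - DISKS_PER_BREAKOUT)  # 3
```

That spare-channel count is exactly why a node can be rewired to six disks without losing motherboard hot-swap, per the post above.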
 