Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


legen

Active Member
Mar 6, 2013
213
39
28
Sweden
Two questions about the C6100 platform in this thread:

1) I saw in the first post that they are configured for up to three drives per node. Are there any other possibilities for changing that configuration? Specifically, one node with all 12 drives?
and,
2) Are the nodes interconnected in any way that can be utilized from a Linux environment? And if so, at what speed does this interconnect operate?

I'm specifically thinking of putting all drives on one host, doing a RAID-10, then sharing that disk with the other nodes. Another option would be to RAID-0 or 1 the 3 drives within each node, then do the opposite across the nodes. Both of these depend on some sort of connection between the nodes.

I flipped through the pages of this thread, and googled a bit, but wasn't able to find the answers to these. If they're here and I missed it, I apologize. Could be that I'm using the wrong terms to search.

Any info would be appreciated!
I can't answer everything, but:

One thing to note: the manual says you can't run these in a full configuration (all disks, memory, and CPU) with all 4 nodes on the 1100W power supplies. Not really sure what that means for those wanting to run full disk stacks. In the HPC model you would run them diskless, or with only one set of disks, so they are never fully loaded.
This might be worth knowing if you plan on populating all bays and running all nodes.

Each node can easily connect to 6x 3.5" drives (two drive bays). Check this image I took with my phone: http://img46.imageshack.us/img46/6668/201303281508151.jpg

Edit: in the picture you can see one node with 2 drive bays connected to it, while the node to the right has no drive bays connected.
 
Last edited:

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
For disk wiring questions, start with post number 80 in this thread. In short: wiring for up to six drives per node can be done easily and cleanly using readily available cables. More than that requires some very sloppy cable hacks. Of course, you can just wait three years until the c6220 lands on eBay - I read that the disk backplane in those is expander-based and programmable to re-allocate drives any way you want.

There is no special interconnect between nodes - except for good old Ethernet, SAS, Infiniband, or anything else you can stuff into the mezzanine and half-height PCIe slots.

Two questions about the C6100 platform in this thread:

1) I saw in the first post that they are configured for up to three drives per node. Are there any other possibilities for changing that configuration? Specifically, one node with all 12 drives?
and,
2) Are the nodes interconnected in any way that can be utilized from a Linux environment? And if so, at what speed does this interconnect operate?

I'm specifically thinking of putting all drives on one host, doing a RAID-10, then sharing that disk with the other nodes. Another option would be to RAID-0 or 1 the 3 drives within each node, then do the opposite across the nodes. Both of these depend on some sort of connection between the nodes.

I flipped through the pages of this thread, and googled a bit, but wasn't able to find the answers to these. If they're here and I missed it, I apologize. Could be that I'm using the wrong terms to search.

Any info would be appreciated!
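
For what it's worth, here's a minimal sketch of the "one node owns all twelve disks" idea from the quoted question - assuming Linux, mdadm software RAID, and an NFS export over the onboard gigabit Ethernet. The device names, mount point, and subnet below are placeholders, not anything specific to the c6100:

```python
#!/usr/bin/env python3
"""Sketch only: build a 12-disk RAID-10 on the storage node and export it to
the other three nodes over NFS. Assumes the hot-swap drives show up as
/dev/sdb../dev/sdm and that mdadm and nfs-kernel-server are installed."""
import subprocess

DISKS = [f"/dev/sd{c}" for c in "bcdefghijklm"]   # 12 assumed device names

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Software RAID-10 across all twelve hot-swap drives
run(["mdadm", "--create", "/dev/md0", "--run", "--level=10",
     "--raid-devices=12"] + DISKS)
run(["mkfs.ext4", "-F", "/dev/md0"])
run(["mkdir", "-p", "/srv/shared"])
run(["mount", "/dev/md0", "/srv/shared"])

# 2) Export it so the other nodes can mount it as ordinary network storage
with open("/etc/exports", "a") as exports:
    exports.write("/srv/shared 10.0.0.0/24(rw,no_root_squash,async)\n")
run(["exportfs", "-ra"])
```

The other nodes would then just mount that export over their onboard NICs; without a 10GbE or Infiniband mezzanine card, though, the share is capped at roughly the ~110MB/s a single gigabit link carries.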
 

darkdub

New Member
Mar 28, 2013
5
0
1
Thanks for the information thus far!

I thought of another scenario, given what has been said. If the disk configuration remained unmodified (and assuming that the power supplies can handle the load), would it be possible to connect one additional drive per node (beyond the 12 in the hot-swap slots) using the on-board SATA ports? That is, making the total number of drives 16.

And... if the on-board SATA ports could be used in that manner, is there a way, using in-chassis cables/supplies, to power that additional drive (again, assuming that the existing power supplies can support the load)?
 

devioustrap

New Member
Mar 6, 2013
8
0
1
Thanks for the information thus far!

I thought of another scenario, given what has been said. If the disk configuration remained unmodified (and assuming that the power supplies can handle the load), would it be possible to connect one additional drive per node (beyond the 12 in the hot-swap slots) using the on-board SATA ports? That is, making the total number of drives 16.

And... if the on-board SATA ports could be used in that manner, is there a way, using in-chassis cables/supplies, to power that additional drive (again, assuming that the existing power supplies can support the load)?
The board physically has connectors for an extra drive, yes. Getting power to it would be harder.

Are you thinking of trying to hide an SSD somewhere? The (3.5") chassis only has room for 12 drives.
 

darkdub

New Member
Mar 28, 2013
5
0
1
The board physically has connectors for an extra drive, yes. Getting power to it would be harder.

Are you thinking of trying to hide an SSD somewhere? The (3.5") chassis only has room for 12 drives.
That's it exactly. I'm thinking of fabricating a bracket to suspend a 2.5" SSD (or two) in each node. Just looking at feasibility at this point.

Given that the connectivity is possible, I'd really be interested in any ways to power the additional drives.
 

jcdmacleod

New Member
Mar 27, 2013
5
0
0
I looked briefly a few weeks ago - no official support for OMSA (OpenManage Server Administrator) in the entire c-series line.
I thought this might still be the case. Has anyone had any luck with OpenIPMI? I am looking to obtain monitoring results similar to what I get via SNMP when using OMSA - specifically temperature and fan speed data. Since the fans are run from a backplane rather than the motherboards themselves, this may not even work - or does the MGMT port provide anything that can be remotely monitored? It isn't a DRAC in the full sense of a DRAC. I have lots of experience with Dell, but none with the C series.
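
For reference, something along these lines should pull temperature and fan readings straight from the BMC with plain ipmitool - untested on the C6100 specifically, and the host and credentials below are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: read the BMC's sensor data remotely over IPMI and print only the
temperature and fan-speed rows. Assumes ipmitool is installed and the BMC
address/credentials below are replaced with real ones."""
import subprocess

HOST, USER, PASSWORD = "10.0.0.50", "root", "root"   # placeholder BMC details

out = subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", HOST, "-U", USER, "-P", PASSWORD, "sensor"],
    capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    # 'ipmitool sensor' rows look like: name | reading | unit | status | thresholds...
    fields = [f.strip() for f in line.split("|")]
    if len(fields) >= 3 and fields[2] in ("degrees C", "RPM"):
        print(f"{fields[0]:<24} {fields[1]:>10} {fields[2]}")
```

Whether the chassis-level fans actually show up per node is the open question, since they hang off the fan backplane rather than each motherboard.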

I'm trying to work through this prior to a purchase. I could bite the bullet and figure it out, but I really do not have the time for playing around at the moment, sadly.

Thanks in advance,

John
 

jcdmacleod

New Member
Mar 27, 2013
5
0
0
There is no (usual) way to power a SATADOM directly off the MB, is there? If not, is there a USB header on the MB? If so, that's a source of power for a SATADOM. It would be an alternative to an SSD, although slightly more costly and smaller in size, but I guess it depends on what you want to do with it.
 

jcdmacleod

New Member
Mar 27, 2013
5
0
0
To those of you looking for boot drive alternatives who wish to keep the bays open for better use - why not boot a stateless system? We do this a lot at $work: we boot standard Dell workstations/desktops stateless in a lab for testing, most of the R series for bare-metal use, and even VMware instances. We use Perceus, which is widely used in the HPC field - although we do nothing in that field, we simply boot stateless, since saving 2 instances of spinning rust per server (as boot drives) across the number of servers we run ends up as a healthy power saving, plus the operational benefits. More details about Perceus here - Home. This is how I will boot mine (when they arrive); they will either be completely diskless or use hybrid mode, with only data kept on local drives. Perceus and Chef = low power, low maintenance...

YMMV, but if you need any more info, just yell.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Some c6100 I/O testing:

I have a c6100 node with dual L5520s and 96GB RAM. The node has the LSISAS2008 SAS/SATA mezzanine card and is in a 24-bay chassis. I hooked up five old OCZ Vertex3 drives in addition to a non-SSD boot drive and ran some IOMeter tests to look for any IO problems.

The good news is that the setup performs as expected. Maximum throughput for 1MB random reads over a 20GB data set was 2,601MB/s - right in line with the best that the LSISAS2008 can do in an x8 slot on any platform. Maximum 4KB IOPS was 304,804, likely held back just slightly by the five old drives. CPU utilization was very low - 1.3% in the throughput test and 11.2% in the IOPS test.

If anyone has the LSI1068e version of the card, I'd love to know how it performs compared to the newer 2008 version that I tested - the 1068e cards are far more available than the 2008 cards.
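
As a rough back-of-the-envelope check on those figures (my arithmetic, not part of the test):

```python
# Per-drive share of the measured aggregates, plus the raw PCIe 2.0 x8 ceiling.
drives = 5
print(2601 / drives)       # ~520 MB/s per Vertex3 - close to a SATA 6Gb/s drive's limit
print(304_804 // drives)   # ~61k 4KB IOPS per drive
print(8 * 500)             # PCIe 2.0 x8 = 4,000 MB/s raw, so the slot isn't the bottleneck
```

In other words, the 2,601MB/s looks drive/controller-limited rather than slot-limited.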
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I like the SATADOM idea, so I looked for a USB port or header on the motherboard - there isn't one.

There is no (usual) way to power a SATADOM directly off the MB, is there? If not, is there a USB header on the MB? If so, that's a source of power for a SATADOM. It would be an alternative to an SSD, although slightly more costly and smaller in size, but I guess it depends on what you want to do with it.
 

gigatexal

I'm here to learn
Nov 25, 2012
2,913
607
113
Portland, Oregon
alexandarnarayan.com
To those of you looking for boot drive alternatives who wish to keep the bays open for better use - why not boot a stateless system? We do this a lot at $work: we boot standard Dell workstations/desktops stateless in a lab for testing, most of the R series for bare-metal use, and even VMware instances. We use Perceus, which is widely used in the HPC field - although we do nothing in that field, we simply boot stateless, since saving 2 instances of spinning rust per server (as boot drives) across the number of servers we run ends up as a healthy power saving, plus the operational benefits. More details about Perceus here - Home. This is how I will boot mine (when they arrive); they will either be completely diskless or use hybrid mode, with only data kept on local drives. Perceus and Chef = low power, low maintenance...

YMMV, but if you need any more info, just yell.
Like booting to remote storage using iSCSI?

I'm looking to do something similar; I just don't know how to boot install media and have the installer see the remote storage as something to install to.
 

seang86s

Member
Feb 19, 2013
164
16
18
Some c6100 I/O testing:

I have a c6100 node with dual L5520s and 96GB RAM. The node has the LSISAS2008 SAS/SATA mezzanine card and is in a 24-bay chassis. I hooked up five old OCZ Vertex3 drives in addition to a non-SSD boot drive and ran some IOMeter tests to look for any IO problems.

The good news is that the setup performs as expected. Maximum throughput for 1MB random reads over a 20GB data set was 2,601MB/s - right in line with the best that the LSISAS2008 can do in an x8 slot on any platform. Maximum 4KB IOPS was 304,804, likely held back just slightly by the five old drives. CPU utilization was very low - 1.3% in the throughput test and 11.2% in the IOPS test.

If anyone has the LSI1068e version of the card, I'd love to know how it performs compared to the newer 2008 version that I tested - the 1068e cards are far more available than the 2008 cards.
What's the Dell part number of the SAS2008 mezzanine card?
 

jcdmacleod

New Member
Mar 27, 2013
5
0
0
Like booting to remote storage using iSCSI?

I'm looking to do something similar; I just don't know how to boot install media and have the installer see the remote storage as something to install to.
No, that is the interesting thing about this. Perceus uses what it calls a capsule to boot an OS; the drives are all ramdisks, so you do need more RAM than your intended application requires, but nothing crazy. The capsule is pre-built with the packages required for the intended use of the server. At boot, the Perceus master assigns an IP and PXE-boots a small bootstrap kernel, which then requests the capsule image from the master over either HTTP or NFS - assignment is based on MAC address. In a pure stateless environment, any time a reboot happens all config is lost; however, we overcome that by using Chef to keep the config details of each host. On reboot the clean image is run, Chef configures the server as it should be, and everything works a treat. In hybrid mode, local storage is used in combination with the ramdisks so you can keep persistent data on the local drives.

There are many scale-out users that do this very thing. No expensive iSCSI or storage requirements. It is much easier for scale-out applications than anything else, but I use it at home for testing purposes beyond what we use at work and it functions well.

The I/O load on the master is low, as each node only pulls the image at boot; otherwise there is no access to the master.
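
To make that flow concrete, here's a toy illustration of the "capsule chosen by MAC" step - this is not Perceus code, just a minimal HTTP handler showing the idea, with made-up MAC addresses and image paths:

```python
#!/usr/bin/env python3
"""Toy sketch of serving a per-node boot image keyed by MAC address, roughly
what a stateless-boot master does after PXE hands off to the bootstrap kernel.
The MAC-to-capsule mapping and file paths are invented for the example."""
from http.server import BaseHTTPRequestHandler, HTTPServer

CAPSULES = {
    "00:26:b9:aa:bb:01": "/srv/capsules/compute.img",
    "00:26:b9:aa:bb:02": "/srv/capsules/storage.img",
}

class CapsuleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # bootstrap kernel requests e.g. GET /capsule/00:26:b9:aa:bb:01
        mac = self.path.rstrip("/").rsplit("/", 1)[-1].lower()
        image = CAPSULES.get(mac)
        if image is None:
            self.send_error(404, "unknown node")
            return
        with open(image, "rb") as f:
            data = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("", 8080), CapsuleHandler).serve_forever()
```

Anything that has to survive a reboot then lives in configuration management (Chef in the setup described above) rather than on the node itself.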
 

swflmarco

Member
Mar 28, 2013
39
0
6
Fort Myers, FL USA
Has anyone had any issues with the 1.70 BIOS?
My hosts are all ESXi 5.0 with a single SSD onboard. Out of my 8 nodes, I received 3 different firmware versions: 1.05, 1.47, and 1.62. I will probably try to upgrade the firmware remotely over IPMI in the next few days.
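
Before pushing anything, it can help to inventory what each BMC currently reports; here's a rough sketch with ipmitool (this only reads the BMC firmware revision - plain ipmitool doesn't expose the BIOS version - and the addresses/credentials are placeholders):

```python
#!/usr/bin/env python3
"""Sketch: query each node's BMC over IPMI and print the firmware revision it
reports. BMC IP addresses and credentials below are assumptions - adjust."""
import subprocess

NODES = [f"10.0.0.{i}" for i in range(50, 58)]   # assumed BMC IPs for 8 nodes
USER, PASSWORD = "root", "root"                  # assumed defaults - change as needed

for node in NODES:
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", node, "-U", USER, "-P", PASSWORD,
         "mc", "info"],
        capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.strip().startswith("Firmware Revision"):
            print(f"{node}: BMC firmware {line.split(':', 1)[1].strip()}")
```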
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I just deployed a couple of C6100s: 4-node, 2x L5520s, 48GB RAM + quad port NIC. So far liking them.

Very cool. Which quad port NICs and are they 3.5" or 2.5" chassis?

BTW CL = Cloud lab by chance? :)