Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


elements

New Member
May 22, 2013
I bought from lextecny2012 / Deep Discount Servers (They are the same). They shipped quickly and packaged nicely.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Have purchased from other providers that I think are a bit larger in scale with many more products. Have received spam comments on the blog from DDS. Akismet stopped them so no big deal.
 

knknif

New Member
May 27, 2013
Does there exist any other piece of equipment that provides a similar level of hardware at the same price point per gig/GHz? I can't imagine there is, but I wouldn't mind spending half as much and getting half as much. I feel pretty confident, though, that if I spend half as much, I'd only get about a quarter as much.
 

cafcwest

Member
Feb 15, 2013
Richmond, VA
Depending on your use case, of course, I'd consider a Dell PowerEdge C1100. On eBay, with 72GB of RAM, they are running about $450. So if you don't need four servers and only want to spend half as much money, you'd get a quarter of a C6100 (though with a bit more RAM for your 'node').
 

s0lid

Active Member
Feb 25, 2013
Tampere, Finland
I used a FreeDOS USB stick for updating the BIOS and BMC firmware.

For BMC:
If you're using the self-extracting disk image from Dell's site,
/SOCFlash/FLASH8.BAT works just fine.

For BIOS:
afudos /i*newromnamehere* /pbnc /n

For example:
afudos /i6100v170.ROM /pbnc /n

Works like a charm. No idea about the fan controller; I got the older model, which doesn't have updates available.
 

gardar

Member
Nov 15, 2012
So how much ram do these take?

Are we talking about maximum configuration 12*16gb per node?
 

PigLover

Moderator
Jan 26, 2011
So how much ram do these take?

Are we talking about maximum configuration 12*16gb per node?
You have to populate both CPUs to use all 12 DIMM slots in a node. With only one CPU you can only use 6 DIMMs.

Max is 12x 8GB per node when using X55xx or L55xx CPUs.

Max is 12x 16GB per node when using X56xx or L56xx CPUs.

If you use more than 3 DIMMs per CPU and the DIMMs are quad-rank (most RDIMMs are), memory speed will be reduced to DDR3-800. Some people have reported that using dual-rank RDIMMs avoids this downgrade, but dual-rank modules are hard to find at higher capacities. In most cases, for most workloads, there is little impact, because the lower CAS latency of most DIMMs offsets the lower clock for random accesses. But for synthetic benchmarks and some memory-intensive workloads (gaming, rendering, ray-tracing, etc.) the lower speed may be noticeable.
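For the curious, the per-node and per-chassis ceilings above work out like this (a quick illustrative calculation; the constants just restate the limits in this post):

```python
# RAM ceilings per C6100 node/chassis, restating the limits above.
DIMM_SLOTS_PER_NODE = 12   # 6 slots per CPU; both CPUs must be populated
NODES_PER_CHASSIS = 4

def max_ram_gb(max_dimm_gb, cpus=2):
    """Largest config for one node given the biggest supported DIMM size."""
    return (DIMM_SLOTS_PER_NODE if cpus == 2 else 6) * max_dimm_gb

# X55xx/L55xx (8GB DIMMs max): 96GB per node, 384GB per chassis
print(max_ram_gb(8), max_ram_gb(8) * NODES_PER_CHASSIS)
# X56xx/L56xx (16GB DIMMs max): 192GB per node, 768GB per chassis
print(max_ram_gb(16), max_ram_gb(16) * NODES_PER_CHASSIS)
```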
 

johnduhart

New Member
Mar 14, 2013
Found a very interesting set of videos from what looks like a Dell technician's training session on the C6100.

YouTube

They go over parts of the C6100 and some of the issues they've encountered over its lifespan.
 

cjd

New Member
May 30, 2013
mSATA

Been digging around for the past 2 days looking for ways to fit some extra SSDs inside each node of the Dell C6100; so far there seem to be 2 viable solutions.

Although these cards only allow you to add 1 mSATA SSD inside each node (I was looking for a way to add 3 SSDs), at least now your 3 mechanical disks under ZFS won't choke when your DB writes heavily to them, and you can sleep better at night knowing a 3-way ZFS mirror is better than a 2-way.


Koutech IO-PESA238 PCI-Express 2.0 x1 Low Profile Ready SATA III (6.0Gb/s) Dual Channel Controller Card with HybridDrive Support (1 x Int+1 x mSATA)



Two 6.0Gbps SATA III channels
One (1) internal 7-pin SATA III connector
One (1) internal mSATA (mini PCIe) socket
Supports all mSATA (mini-SATA) SSD devices at 1.5/3.0/6.0 Gbps
Compliant with 5Gb/s PCI Express 2.0 specifications
Transfer Rate up to 5Gb/s


OWC Mercury AccelsiorM mSATA PCIe Controller
http://eshop.macsales.com/item/Other World Computing/PCIEACCELM



AHCI compliant (no drivers required)
Mac and PC-bootable
Sustained Reads (up to) 380MB/s
Sustained Writes (up to) 380MB/s
3 Year OWC Warranty


I am going to try the Koutech IO-PESA238 as it appears to have much higher throughput. Supposedly it is designed to turn a normal HDD into a hybrid drive, but someone on Newegg said they booted into Windows from an mSATA SSD using it, so I am going to assume it'll work as a standalone drive.
Did you try this? I'm wondering how it worked out for you.
 

PigLover

Moderator
Jan 26, 2011
I am using the Koutech with a 120GB Mushkin SSD on 3 sleds of my C6100. I wired all of the hard-drive slots to a single sled and needed something to use for boot on the other 3. This fit the application nicely.

Functionally, they work very well. Have successfully booted several Linux distros, Windows 7 & Windows 8 with no driver issues. Tested multi-boot using both Windows boot loader and Grub2 as primary boot loader. No worries.

I did bench them and they do not provide anywhere near SATA III speeds. Even though the Mushkin mSATA SSD performs quite well in a "real" mSATA application, on this adapter it delivers more like SATA II SSD speeds. That's not an issue for me because boot-disk speed is almost meaningless in my use cases and tests, but it was somewhat disappointing.

I did notice that you can't boot XCP (Xen Cloud platform) on it due to a driver issue in the Dom0 OS. Otherwise, it has been perfect. I didn't test booting ESXi but I would imagine that may not work, again due to driver issues.

I think if I were going to do it again I would use something like this for mSATA: Amazon.com: MP3S (mSATA to SATA Adapter for PCIe Slot): Computers & Accessories. These just use the PCIe slot to power the mSATA card. Then you use a short SATA cable to wire it over to an on-board SATA port. You'll avoid any driver issues as every OS in the world understands Intel on-board SATA. You won't get SATA-III speeds but in fact the Koutech card is "SATA-III in name only" anyway.
 

Patrick

Administrator
Staff member
Dec 21, 2010
I think if I were going to do it again I would use something like this for mSATA: Amazon.com: MP3S (mSATA to SATA Adapter for PCIe Slot): Computers & Accessories. These just use the PCIe slot to power the mSATA card. Then you use a short SATA cable to wire it over to an on-board SATA port. You'll avoid any driver issues as every OS in the world understands Intel on-board SATA. You won't get SATA-III speeds but in fact the Koutech card is "SATA-III in name only" anyway.
That looks very reasonable. I may get 1-2 to play around with.
 

knknif

New Member
May 27, 2013
Quick question.

If I were to run this C6100 24/7, how much might that cost in electricity per month?

Granted this answer is going to vary depending on your location and the local costs of whatever. However, if you were to give me a price and what city you live in, I should be able to get a pretty good idea of what it might run my bill up to should I leave it on all the time.
 

Rain

Active Member
May 13, 2013
Quick question.

If I were to run this C6100 24/7, how much might that cost in electricity per month?

Granted this answer is going to vary depending on your location and the local costs of whatever. However, if you were to give me a price and what city you live in, I should be able to get a pretty good idea of what it might run my bill up to should I leave it on all the time.
Assume a 500W (about 50% of the power supply capacity) load, 24/7.

0.5 kW * (30 days * 24 hours) = 360 kWh of usage per month

360 kWh * $0.12 = ~$43 per month (where $0.12 is the average $/kWh in the US)

If you live in the US, you can use this map to get a closer $/kWh for your state.
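The same back-of-the-envelope estimate, expressed in a few lines (the 500W load and $0.12/kWh rate are just the assumptions above; plug in your local rate for a closer number):

```python
def monthly_cost_usd(load_watts, rate_per_kwh, hours=24 * 30):
    """Electricity cost for a constant load over a 30-day month."""
    kwh = load_watts / 1000 * hours  # watts -> kWh over the period
    return kwh * rate_per_kwh

# 500W at ~$0.12/kWh: 360 kWh, roughly $43/month
print(round(monthly_cost_usd(500, 0.12), 2))
```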
 

knknif

New Member
May 27, 2013
That's cool, so about $55 for me in San Diego.

Under what circumstances might that 50% capacity hold true? When might it be lower or higher? Would it strictly depend on the hardware in the server (more hard drives, RAM, etc.), or on whether the server is actually doing CPU-intensive tasks?
 

gardar

Member
Nov 15, 2012
You have to populate both CPUs to use all 12 DIMM slots in a node. With only one CPU you can only use 6 DIMMs.

Max is 12x 8GB per node when using X55xx or L55xx CPUs.

Max is 12x 16GB per node when using X56xx or L56xx CPUs.

If you use more than 3 DIMMs per CPU and the DIMMs are quad-rank (most RDIMMs are), memory speed will be reduced to DDR3-800. Some people have reported that using dual-rank RDIMMs avoids this downgrade, but dual-rank modules are hard to find at higher capacities. In most cases, for most workloads, there is little impact, because the lower CAS latency of most DIMMs offsets the lower clock for random accesses. But for synthetic benchmarks and some memory-intensive workloads (gaming, rendering, ray-tracing, etc.) the lower speed may be noticeable.

Thanks for an excellent reply; looks like I have to pick up one (or a dozen) of those for a KVM/OpenStack cluster.

What kind of disks are you guys running in your setups? I can imagine the 7200rpm SATA drives would be a big bottleneck.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Thanks for an excellent reply; looks like I have to pick up one (or a dozen) of those for a KVM/OpenStack cluster.

What kind of disks are you guys running in your setups? I can imagine the 7200rpm SATA drives would be a big bottleneck.
Generally I'm just using one SSD per node. Backup nodes get a 3TB WD Red just to archive off of. Also remember that the SATA ports are driven off the ICH10R, so you get 6 SATA II ports with an aggregate maximum of about 650-660MB/s. You can use a PCIe controller for more disk performance, though.
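To see when that shared ceiling starts to matter, a rough sketch (the ~660MB/s aggregate figure is from this post; the per-SSD speed is an assumed example, not a measured number):

```python
ICH10R_AGG_MBPS = 660   # approximate aggregate ceiling noted above
SSD_SEQ_MBPS = 250      # assumed sequential speed of one SATA II-era SSD

def effective_per_drive_mbps(n_drives):
    """Per-drive sequential throughput once the shared cap kicks in."""
    return min(SSD_SEQ_MBPS, ICH10R_AGG_MBPS / n_drives)

for n in (1, 2, 3, 6):
    print(n, effective_per_drive_mbps(n))
```

With these numbers, two SSDs still run at full speed, but by three drives the shared link, not the drives, becomes the bottleneck.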
 

Fzdog2

Member
Sep 21, 2012
Should the ICH10R chipset be able to detect and use SAS drives? I'm having a weird issue where I can slot a SATA drive in and it detects just fine, but when I swap in a 10k SAS drive I get nothing. I can move the SAS drives over to an LSI 9211i and it has no problems; just the onboard ports don't detect the drives.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
You are correct: The onboard ports are SATA ports, not SAS. It's a controller issue, not a cabling issue; with an internal PCIe or mezzanine SAS card, the disk backplane and wiring will do SAS.

Should the ICH10R chipset be able to detect and use SAS drives? I'm having a weird issue where I can slot a SATA drive in and it detects just fine, but when I swap it out to a 10k SAS drive I get nothing. I can move the SAS drives over to a LSI 9211i and it has no problems, just the on board ports don't detect the drives.