Dell C6100 XS23-TY3 2U 4-Node (8 CPU) Cloud Server


Patrick

Administrator
Staff member
Dec 21, 2010
Welcome, ifccnb!

Will try answering a few questions.

1. Yes on Windows 7. I actually have two nodes running x64 Windows 7.
2. Easier. Log in via IPMI, remote-mount the OS image, and install via remote KVM. Give it power and a network connection and you can administer it remotely.
3. Very true on benchmarks. I'm thinking of doing a dual L5520 vs. dual L5530 vs. dual L5638 test to see if the price differential is worth it (for AES it may make sense).
4. Some of the 6-core 5600s are not that bad. L5520s are very inexpensive though. We have a CPU/memory thread for these: http://forums.servethehome.com/grea...-xeon-lga1366-cloud-inexpensively-thread.html
5. Not enough time to tell yet. One major advantage is that for under $300 you can basically get a near-complete spare kit. Folks here have lots of spares, which may help you one day.
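For anyone new to out-of-band management, the remote workflow in point 2 is easy to script as well. A minimal sketch using ipmitool from Python (the BMC address and credentials below are placeholders, not C6100 defaults):

# Minimal sketch: poll and control one C6100 node's BMC via ipmitool.
# Assumes ipmitool is installed; host/user/password are placeholders.
import subprocess

BMC_HOST = "192.168.1.120"   # hypothetical BMC IP of one node
BMC_USER = "root"            # placeholder credentials
BMC_PASS = "changeme"

def ipmi(*args):
    # Run an ipmitool command against the node's BMC over the LAN.
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS] + list(args)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
    print(ipmi("sensor", "list"))               # temperatures, fans, voltages
    # ipmi("chassis", "power", "cycle")         # uncomment to power-cycle the node

(Remote ISO mounting and the graphical KVM console themselves are typically done through the BMC's web interface rather than ipmitool.)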
 

Dragon

Banned
Feb 12, 2013
Hello server folks,

I am into graphic design and use software like 3ds Max, the Adobe suite, and the like. I have been looking into a solution for offloading rendering tasks and was thinking of putting a project like this together for my home office. ---> Link My thoughts were along the lines of four bang-for-the-buck Intel "Ivy Bridge" nodes, with only the essentials: motherboard, CPU, RAM, PSU, and HD. Price per node is between $500 and $600, with an additional $150 or so for extras.

I must say that the Dell C6100 XS23-TY3 has piqued my curiosity. 4 nodes, 8 CPUs, 32 cores, and 96GB of DDR3 at $900 + shipping + the cost of 4 HDs... on the surface there seems to be a good cost/performance ratio! All packed into one space-saving, albeit loud, unit.

3. CPU performance - At stock speeds, how do 2x L5520s compare to, let's say, an i7-3770K (LGA 1155 Ivy Bridge)? I have seen some benchmark scores and it looks close, but I'm not sure how reliable the sources are or if they even apply to my situation.
Hi, welcome. When in doubt, I recommend going to spec.org for specific performance differences between CPUs.

Search Published SPEC Results

SPECint2006 scores (integer performance):
L5520 (2cpu): 27
3770k: 53.2
1230v2: 54
1270v2: 56.8

SPECfp2006 scores (floating point performance):
L5520 (2cpu): 31.5
3770k: 66.2
1230v2: 69.2
1270v2: 71

SPECint_rate2006 scores (integer throughput):
L5520 (2cpu): 200
3770k: 189
1230v2: 192
1270v2: 199

SPECfp_rate2006 scores (floating point throughput):
L5520 (2cpu): 158
3770k: 130
1230v2: 136
1270v2: 140

What this translates to is that for single-threaded tasks the 2x L5520 takes roughly twice as long as a 3770K to finish, but for threaded workloads that run continuously for minutes or hours, the 2x L5520 will handle more work than a single 3770K within the same time frame.
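To put rough numbers on that, here is the arithmetic from the scores above (just back-of-the-envelope, in Python):

# Ratios derived from the SPEC CPU2006 scores quoted above (higher is better).
scores = {
    "SPECint2006":      {"2x L5520": 27.0,  "i7-3770K": 53.2},
    "SPECfp2006":       {"2x L5520": 31.5,  "i7-3770K": 66.2},
    "SPECint_rate2006": {"2x L5520": 200.0, "i7-3770K": 189.0},
    "SPECfp_rate2006":  {"2x L5520": 158.0, "i7-3770K": 130.0},
}
for bench, s in scores.items():
    print(f"{bench}: 2x L5520 is {s['2x L5520'] / s['i7-3770K']:.2f}x the 3770K")
# Single-threaded (non-rate) scores: ~0.48-0.51x, i.e. roughly half the speed.
# Throughput (rate) scores: ~1.06-1.22x, i.e. slightly more total work per unit time.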

The cost-performance ratio of the C6100 is exceptional, BUT if your work is graphics/video related, you might want to pay attention to video compression/encode/decode performance: while the dual L5520 is a bit faster than the 3770K in multi-threaded throughput, for H.264/AVC the 3770K is still roughly twice as fast as two L5520s combined.

CINT2006 Result: Dell Inc. PowerEdge R510 (Intel Xeon L5520, 2.26 GHz) (8 cores, 2 chips)
L5520 464.h264ref base: 814 seconds, peak: 749 seconds

CINT2006 Result: Intel Corporation Intel DH77KC motherboard (Intel Core i7-3770K)
3770K 464.h264ref base: 341 seconds, peak: 305 seconds

(464.h264ref benchmark explained: 464.h264ref: SPEC CPU2006 Benchmark Description)
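Same quick math on those 464.h264ref run times (lower is better here):

# 464.h264ref run times in seconds, from the two SPEC results above.
l5520_base, l5520_peak = 814, 749     # dual L5520 system
i7_base, i7_peak = 341, 305           # i7-3770K system
print(f"base: dual L5520 takes {l5520_base / i7_base:.2f}x as long")   # ~2.39x
print(f"peak: dual L5520 takes {l5520_peak / i7_peak:.2f}x as long")   # ~2.46x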

If you're running a web service or crunching data in the background, the C6100 is a great choice, but I am not sure about using the C6100 for graphic design; time is money, and you might not want to wait twice as long for the same graphic/video to show up on screen.

To find the exact answer you're looking for, you need to find out what encoding mechanism your software uses and then compare the related benchmark.

Also, I have to say graphics/video is not my field, and the speed problem may be solved by a graphics card, but I know for a fact Photoshop doesn't multi-thread well, especially when you're running custom scripts.

This is not a definitive answer, I am just sharing what I know, hope this helps. :cool:
 

PigLover

Moderator
Jan 26, 2011
I can add to that some personal experience. For the last couple of years I've used a dual X5550 with 24GB as a video editing/rendering workstation. For a bunch of reasons I recently 'downgraded' to a smaller/quieter system built on a single E3-1245 v2 (essentially the same chip as a non-K i7-3770 in Xeon skin). I thought this would be OK because the volume of work I had was decreasing rapidly.

I was surprised to find that the E3-1245 v2 kicked the dual X5550 in the ass... not just a little bit faster for renders... 25% or more on real workflows. Not such a downgrade after all.

I'm still excited about the C6100 because the cost per thread and large memory options are better for a VM/cloud lab, but for real work with video I'd pass.
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
My WD IcePack adapter arrived today. It works very well in the C6100, provided that you get one that includes screws. The aluminum frame is so thick that standard drive screws do not work. Fortunately, mine arrived with four very long screws taped to the sled. If yours does not, then it's a doorstop.

Solid is an understatement. These are also made as heatsinks.
 

ifccnb

New Member
Mar 9, 2013
Thank you for the information everyone. That clears a few things up for me. I appreciate you taking the time.

Manny
 

devioustrap

New Member
Mar 6, 2013
Does anybody know whether the PCI slot on the nodes has a half-height or a full size bracket? I'm finding conflicting info online.
 

Jeggs101

Well-Known Member
Dec 29, 2010
Does anybody know whether the PCI slot on the nodes has a half-height or a full size bracket? I'm finding conflicting info online.
Welcome to STH! These are LP/half-height. Where did you see otherwise? The first post in this thread has pics if you want visual confirmation.
 

legen

Active Member
Mar 6, 2013
Sweden
Hi!

I have two questions before I buy one of these.

1. Can one use a low-profile RAID card and connect it to more than one 3x3.5" hot-swap enclosure? My ZFS VM currently needs 5 drive bays.

2. Does this unit support VT-d in ESXi? The Xeon generation should, but can anyone confirm that it works? I would like to pass my RAID controller through to a Solaris VM for ZFS.

Edit: I do not mind doing some ghetto wiring for point 1 above. I will modify the case anyway to remove the 20,000 RPM fans.

Thanks for a great forum!
 

PigLover

Moderator
Jan 26, 2011
Yes. You could wire up to 6 drives and stay 'on the sled' (preserving the ability to slide it out easily). You could also ghetto-mod cable all 12 drive slots to a RAID card on one sled, though you'd need fairly long cables. Pretty sure a 1m forward breakout cable would do it. You might be able to use a 0.75m cable, but it would be tight with all the routing around things inside the case.

And yes, the MB on each sled does support VT-d. It is disabled in the default BIOS settings, so don't forget to turn it on before you try to use it.
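If you want to sanity-check that BIOS toggle before handing the box to ESXi, here is a rough pre-flight sketch you can run from a Linux live USB (ESXi itself is configured through the vSphere client's passthrough settings; the paths below are the standard sysfs/ACPI locations):

# Rough check from a Linux live environment: is the BIOS exposing VT-d,
# and has the kernel actually brought the IOMMU up?
import os

# The firmware publishes an ACPI DMAR table only when VT-d is enabled.
dmar_present = os.path.exists("/sys/firmware/acpi/tables/DMAR")

# With the IOMMU active, devices get grouped under iommu_groups.
groups = []
if os.path.isdir("/sys/kernel/iommu_groups"):
    groups = os.listdir("/sys/kernel/iommu_groups")

print("DMAR table present (VT-d enabled in BIOS):", dmar_present)
print("IOMMU groups seen by the kernel:", len(groups))
# Note: many distros also need intel_iommu=on on the kernel command line
# before iommu_groups is populated, even with VT-d enabled in the BIOS.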
 

legen

Active Member
Mar 6, 2013
Sweden
Yes. You could wire up to 6 drives and stay 'on the sled' (preserving the ability to slide it out easily). You could also ghetto-mod cable all 12 drive slots to a RAID card on one sled, though you'd need fairly long cables. Pretty sure a 1m forward breakout cable would do it. You might be able to use a 0.75m cable, but it would be tight with all the routing around things inside the case.

And yes, the MB on each sled does support VT-d. It is disabled in the default BIOS settings, so don't forget to turn it on before you try to use it.
Thanks for the info! I have been googling all over the place for this information. Looks like I will order one of these very soon :)
 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
To add a bit more information: the C6100 disk backplane has 12 or 24 SATA ports depending on which version you purchased. Power for the ports is turned on and off by the motherboards, in groups of three or six drives. If you re-wire the disks, for example allocating four disks to a node instead of three in a twelve-disk chassis, remember that only three disks will power up with that motherboard; the fourth will only power up when another motherboard is powered.

It may be possible to re-wire the backplane to change this, of course, though it isn't as simple as swapping Molex plugs as on a Supermicro backplane. The C6100 uses proprietary plugs.

Thanks for the info! I have been googling all over the place for this information. Looks like I will order one of these very soon :)
 

Jeggs101

Well-Known Member
Dec 29, 2010
One project I am thinking of is doing a diskless chassis with one of them.
 

Scout255

Member
Feb 12, 2013
To add a bit more information: the C6100 disk backplane has 12 or 24 SATA ports depending on which version you purchased. Power for the ports is turned on and off by the motherboards, in groups of three or six drives. If you re-wire the disks, for example allocating four disks to a node instead of three in a twelve-disk chassis, remember that only three disks will power up with that motherboard; the fourth will only power up when another motherboard is powered.

It may be possible to re-wire the backplane to change this, of course, though it isn't as simple as swapping Molex plugs as on a Supermicro backplane. The C6100 uses proprietary plugs.
Hmmm, thanks for the info.... Guess it would be best to just use an external disk enclosure then.
 

legen

Active Member
Mar 6, 2013
Sweden
To add a bit more information: the C6100 disk backplane has 12 or 24 SATA ports depending on which version you purchased. Power for the ports is turned on and off by the motherboards, in groups of three or six drives. If you re-wire the disks, for example allocating four disks to a node instead of three in a twelve-disk chassis, remember that only three disks will power up with that motherboard; the fourth will only power up when another motherboard is powered.

It may be possible to re-wire the backplane to change this, of course, though it isn't as simple as swapping Molex plugs as on a Supermicro backplane. The C6100 uses proprietary plugs.
Interesting. Well, this is not a showstopper for me. When I get my unit I will investigate this further.
 

PigLover

Moderator
Jan 26, 2011
My WD IcePack adapter arrived today. It works very well in the C6100, provided that you get one that includes screws. The aluminum frame is so thick that standard drive screws do not work. Fortunately, mine arrived with four very long screws taped to the sled. If yours does not, then it's a doorstop.
I received a couple of IcePacks from this seller. His listing is still active and it looks like he has lots of them (it shows 101 sold / more than 10 available). They arrived with all four screws taped into the holes (including the one under the warranty sticker, somehow loosened so carefully it might have passed a warranty inspection).

One note: the original WD screws use the world's smallest Torx driver(***)! I had to go to three different stores to find one that fits (found it at OSH if you're in CA).

(***) OK - it's not that small. It's T-9. But Home Depot was sold out...
 

config

New Member
Mar 16, 2013
SR-IOV appears to be unavailable on this motherboard's dual 82576 NICs.

BIOS settings:
VT-d enabled
SR-IOV enabled

ESXi dmesg:


2013-03-17T11:15:04.130Z cpu11:4612)<6>igb 0000:01:00.0: eth0: PBA No: 82576B-001
2013-03-17T11:15:04.130Z cpu11:4612)<6>igb 0000:01:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
2013-03-17T11:15:04.130Z cpu11:4612)PCI: driver igb claimed device 0000:01:00.0
2013-03-17T11:15:04.131Z cpu11:4612)<6>igb: : igb_validate_option: max_vfs - SR-IOV VF devices set to 7
2013-03-17T11:15:04.131Z cpu11:4612)<4>igb 0000:01:00.1: Failed to initialize SR-IOV virtualization
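Not an answer, but when sifting through dumps like this it can help to pull out the SR-IOV lines per igb function. A trivial sketch matching the exact log format above:

# Tiny helper: scan an ESXi dmesg dump for igb SR-IOV results per PCI function.
import re
import sys

def sriov_status(lines):
    # Returns {pci_address_or_option: status_string} from igb log lines.
    status = {}
    for line in lines:
        m = re.search(r"igb (\S+:\S+\.\d): Failed to initialize SR-IOV", line)
        if m:
            status[m.group(1)] = "SR-IOV init failed"
        m = re.search(r"max_vfs - SR-IOV VF devices set to (\d+)", line)
        if m:
            status.setdefault("(module option)", f"max_vfs={m.group(1)}")
    return status

if __name__ == "__main__":
    for dev, state in sriov_status(sys.stdin).items():
        print(dev, state)

Usage: feed it the dmesg text on stdin and it prints which function(s) failed to bring up SR-IOV.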
 

Toddh

Member
Jan 30, 2013
dba, are you sure about the backplane info and the motherboards powering the disks? I was messing around this weekend using the C6100 as a SAN. I threw an LSI 8888ELP into one node and I currently have 7 HDs connected and running. The 8th didn't connect, but I have not checked the cabling yet.

What I did was move the SFF-8087 SATA connectors from Nodes 1 & 2 to the available connectors on Nodes 3 & 4, so all 12 bays connect to Nodes 3 & 4. I installed Node 4 and an empty tray in Node 3 (mb removed). This allows me to connect the 8 fan-out cables from the 8888ELP to the small interface board on the sled, which has 6 SATA connectors per node. As I mentioned, 7 of the 8 HDs are working with just one node running. Nodes 1 and 2 are empty at this time.


To add a bit more information: the C6100 disk backplane has 12 or 24 SATA ports depending on which version you purchased. Power for the ports is turned on and off by the motherboards, in groups of three or six drives. If you re-wire the disks, for example allocating four disks to a node instead of three in a twelve-disk chassis, remember that only three disks will power up with that motherboard; the fourth will only power up when another motherboard is powered.

It may be possible to re-wire the backplane to change this, of course, though it isn't as simple as swapping Molex plugs as on a Supermicro backplane. The C6100 uses proprietary plugs.
This was all temporary, to test whether it would work. I am going to order an Intel 24-port SAS expander to see if I can get all 12 running. One of the challenges will be getting power back there to run the SAS expander.




 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
When you did your test, were all four motherboards powered on (green lights on the front bezel) or just the two? What happens if you pull the other two motherboards out from their slots?

It would be good news if it works. In my testing I didn't get to the point of hooking up cables. I just noted that drives only powered on when the corresponding motherboard was powered.

dba, are you sure about the backplane info and the motherboards powering the disks? I was messing around this weekend using the C6100 as a SAN. I threw an LSI 8888ELP into one node and I currently have 7 HDs connected and running. The 8th didn't connect, but I have not checked the cabling yet.

What I did was move the SFF-8087 SATA connectors from Nodes 1 & 2 to the available connectors on Nodes 3 & 4, so all 12 bays connect to Nodes 3 & 4. I installed Node 4 and an empty tray in Node 3 (mb removed). This allows me to connect the 8 fan-out cables from the 8888ELP to the small interface board on the sled, which has 6 SATA connectors per node. As I mentioned, 7 of the 8 HDs are working with just one node running. Nodes 1 and 2 are empty at this time.

This was all temporary, to test whether it would work. I am going to order an Intel 24-port SAS expander to see if I can get all 12 running. One of the challenges will be getting power back there to run the SAS expander.
 

Toddh

Member
Jan 30, 2013
The only node that had a motherboard was Node 4, the server I was using to run the SAN. Node 3 had the sled inserted minus the motherboard. Nodes 1 and 2 were empty, no sleds installed whatsoever.

Only the HD LEDs for Node 4 lit up. I was worried at first, thinking it wasn't going to work - I had my SFF-8087 cables set up incorrectly and as a result had my HDs in the wrong bays. Once I got the HDs in the correct bays they showed up fine. But no power LED lit for any HDs other than Node 4's.


 

dba

Moderator
Feb 20, 2012
San Francisco Bay Area, California, USA
I consider that great news - we can re-wire the disks.

The only node that had a motherboard was Node 4, the server I was using to run the SAN. Node 3 had the sled inserted minus the motherboard. Nodes 1 and 2 were empty, no sleds installed whatsoever.

Only the HD LEDs for Node 4 lit up. I was worried at first, thinking it wasn't going to work - I had my SFF-8087 cables set up incorrectly and as a result had my HDs in the wrong bays. Once I got the HDs in the correct bays they showed up fine. But no power LED lit for any HDs other than Node 4's.