Dell c6100 and c6220 Killer - Intel quad-node 2U Xeon E5 Virtualization Server


PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Used RAM doesn't seem that bad. $55 to $75 for 8GB sticks, 1333 and 1600 respectively.
No. Not horrible, I suppose. But when you want to install 4 DIMMs per CPU in order to get 4-way interleave and you have 8 CPUs...32 sticks at ~$75 each (roughly $2,400) starts to add up fast.
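For concreteness, a quick sketch of that math, using the used-RAM prices quoted above (the $75/stick figure for the 1600MHz parts is taken as the assumed price):

```python
# Rough cost of populating 4-way interleave across the whole chassis,
# using the used-RAM price quoted above (~$75 per 8GB 1600MHz stick).
dimms_per_cpu = 4                 # one DIMM per channel for 4-way interleave
cpus = 8                          # 2 sockets per node x 4 nodes
price_per_stick = 75              # assumed $/stick for an 8GB 1600MHz DIMM
sticks = dimms_per_cpu * cpus
print(f"{sticks} sticks -> ${sticks * price_per_stick}")  # 32 sticks -> $2400
```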
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
dba, PigLover, b3nz0n8 or anyone else needing to fix bent pins:

Try using a mechanical pencil with the lead removed, it works great if the pins are easily accessible.
Good idea! Sounds much easier than using jeweler's tweezers like I did - and since you are not actually grabbing the pin to bend it, you are less likely to accidentally pull it out when your hand shakes or you sneeze.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Update on noise. I updated the BMC, BIOS & ME on the two sleds with CPUs installed to the current release and could tell right away after reboot that the noise levels had gone WAY down. Had to partially pull the other two sleds to be sure because they were still screaming away. For a server it is now very quiet indeed. Not living-room quiet, mind you, but it's much quieter than the DL180 G6 sitting on top of it in the rack.

When it first rebooted after the flash it did calibrate the fans on the sled, running each one up to full speed in turn. While the pitch was annoying with them running fast the overall sound level was still a bit below the DL180. Of course that was hearing one fan at a time. If it ran up all 12 fans at the same time it would be quite loud.

The firmware upgrade process was extremely easy. Just load the files onto a USB stick and boot into the UEFI text boot menu. The update started automatically, asked a few confirmation prompts, and then off it went.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Update on noise. I updated the BMC, BIOS & ME on the two sleds with CPUs installed to the current release and could tell right away after reboot that the noise levels had gone WAY down. Had to partially pull the other two sleds to be sure because they were still screaming away. For a server it is now very quiet indeed. Not living-room quiet, mind you, but it's much quieter than the DL180 G6 sitting on top of it in the rack.

When it first rebooted after the flash it did calibrate the fans on the sled, running each one up to full speed in turn. While the pitch was annoying with them running fast the overall sound level was still a bit below the DL180. Of course that was hearing one fan at a time. If it ran up all 12 fans at the same time it would be quite loud.

The firmware upgrade process was extremely easy. Just load the files onto a USB stick and boot into the UEFI text boot menu. The update started automatically, asked a few confirmation prompts, and then off it went.
Every time I write about the server, it makes me wish I hadn't sold it!
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
This server keeps giving up more pleasant surprises...

Discovered the drive trays have built-in SFF adapters.

Each tray appears to have standard filler-blanks for mechanical stability and airflow control:


But upon further inspection they are not just fillers - they also double as mounts for SFF drives/SSDs (with mounting screws taped inside):


And SSDs mount just perfectly:


Note: Please don't laugh at me for stuffing a SanDisk SSD into such a nice server...I'm just testing!
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
Wow, now this is quite a system.

Does the system support big-boy E5s, or do you really have to stick with something closer to an E5-2630?
The power supplies are dual 1200 Watt and the motherboard docs say 135 Watts max TDP per processor (times eight, of course). In other words: use any E5 you want, except the 150 watt E5-2687W, confident that you'll run out of money before you run out of watts.
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
The power supplies are dual 1200 Watt and the motherboard docs say 135 Watts max TDP per processor (times eight, of course). In other words: use any E5 you want, except the 150 watt E5-2687W, confident that you'll run out of money before you run out of watts.
The MB specs support 135W TDP, but the chassis (H200WP) is only rated to run with 130W TDP processors. You can still get quite a lot of CPU in that power profile.

The PSUs are 1200W, but like almost all PSUs over 1kW they are current-limited on the input side. This means that when using 115V feeds each PSU is effectively limited to 1050W. You only get the full power rating when using 220V. Also - if you are concerned at all about maintaining redundancy you really can't count the power of both PSUs; you have to assume one of them will fail and ensure that you still have enough power. So you have about 1050W to play with.

So, with 115V feeds, lots of memory, 10GbE or IB cards installed, and any desire to maintain power redundancy, you are effectively limited to 8x 60W TDP CPUs. With 220V input you can get to 8x 80W parts. If you give up power redundancy you can run with 130W TDP parts.

Intel also makes 1600W PSUs for this chassis. You can run 115W TDP CPUs with these and have full power redundancy. But with 115v feeds these PSUs max out at almost exactly the same limits as the 1200W units so for residential/SOHO users in the USA there isn't much point to them unless you plan to hire an electrician too.

Intel provides an Excel-based power calculator tool for this system on their support web site.
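A rough back-of-the-envelope version of that budgeting can be sketched as below. The ~570W non-CPU overhead (memory, drives, fans, NICs) is an assumed figure chosen to match the numbers in the post, not an Intel spec; use Intel's calculator for real sizing.

```python
# Back-of-the-envelope power budgeting for the quad-node chassis.
# overhead_w (~570W for memory, drives, fans, NICs) is an assumption
# fitted to the figures in the post above, not a published spec.
def max_cpu_tdp_w(psu_rating_w=1200, input_volts=115, n_cpus=8,
                  overhead_w=570, redundant=True, chassis_cap_w=130):
    # On 115V feeds these PSUs are input-current-limited to ~1050W
    usable_per_psu = min(psu_rating_w, 1050) if input_volts < 200 else psu_rating_w
    # With redundancy you can only count on a single PSU surviving
    budget = usable_per_psu if redundant else 2 * usable_per_psu
    # The chassis itself caps supported processors at 130W TDP
    return min(chassis_cap_w, (budget - overhead_w) / n_cpus)

print(max_cpu_tdp_w())                    # 115V, redundant -> 60.0
print(max_cpu_tdp_w(input_volts=220))     # 220V, redundant -> 78.75 (~80W parts)
print(max_cpu_tdp_w(redundant=False))     # no redundancy -> capped at 130
```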
 
Last edited:

b3nz0n8

Member
Feb 18, 2013
71
50
18
Dell C410x project(s)

I have another STH reader with dibs on the second one, as soon as I can fix the pins and test. If they change their minds, I'll let you know.

If you have a rather rare and very expensive c410x, then have you seen my made-for-the-c410x Dell c6145s for sale? Talk about a killer GPU platform - 16 CPUs, 128 DIMMs, 32 PCIe devices, 16 teraflops. The price is right, and the pair of them already have four external PCIe ports for connection to that c410x - saving you the $2K it would cost to buy four external-port PCIe cards. I dreamt of that configuration myself, though I never found a good source for "blank" c410x sleds. My plan was to use it for SAS cards as opposed to graphics cards, but the idea is the same.

Also, if you can live with a stripped-down, non-upgradable single-board version of the c6145 with only two heatsinks and no warranty, there is an eBay seller selling them for the very low price of $699 plus shipping.

Just arrived from the far reaches of nowhere. dba, PigLover, local and everyone...thank you very much for the follow up.

Currently, I'm gearing everything up for the 2011 platform, all workstations and rack gear. So that's the plan...currently.

Yes, often I've dreamt about those fantastic c6145's of yours....

Paired with the Dell c410x(s)...and the resulting damage I could do to my power grid...3...2...1....
But I'm on a mission...
Over the last few years I've picked up:

  • (20x) Nvidia Tesla P797 HIC Host Interface Card X16 PCI-Express + Cables
  • (1x) Dual Port Nvidia Tesla HIC Host Interface Card X16 PCI-Express
  • (8x) nVidia PCI-Ex16 2 Meter Cables
  • (1x) Dell C410x PCIe Expansion Chassis, complete with GPGPU carriers "tacos", + (4x) Dell Power Edge C6100 C410x 1400W Power Supply Y53VG LiteOn PS-2142-2L
  • (5x) Dell C410x PCIe Expansion Chassis without GPGPU carriers "tacos", + (4x) Dell Power Edge C6100 C410x 1400W Power Supply Y53VG LiteOn PS-2142-2L
  • (2x) Koolance ERM-3K3UC External Liquid Cooling Systems
  • ....and a whole bunch of other stuff
I read your posts about the c410x and will keep you informed if I run across these:
I'm inspired by all of those here on STH, the knowledge, ability and willingness to help...is simply amazing! I love this place!

But for now...let the hunt begin!! (+ ES Xeons + RAM):

  • Intel H2312WPJR Server
 

capn_pineapple

Active Member
Aug 28, 2013
356
80
28
Perhaps a new thread for your setup, b3nz0n8?

I've been looking for a similar setup to step up the CFD capability at the company I work for (which currently resides on a 2x 6-core Xeon from the dark ages... or maybe 6 years ago... I don't remember, as I shifted companies a few times)