C6100 DCS vs HP S6500 - initial impressions


Mech

New Member
Dec 8, 2015
Hi!

I figured I'd post here as my friends all think I'm crazy. Hopefully, you guys will find it more interesting...

I'm currently running 2 x Dell C6100 DCS (4-node) and 1 x HP S6500 (8-node) in a 24U IBM rack at home.
I have the rack in a spare bedroom with both central A/C and a stand-alone unit, and I ran a separate 20 amp circuit for the hardware. I'm located in Phoenix, AZ, so heat is always an issue, but as long as the house's central A/C is on it seems to be OK. This is WAY too noisy a setup to co-locate with living beings. Surprisingly, the one C6100 that's currently running seems much noisier than the S6500.

The C6100s are of the DCS flavor, which hasn't caused me much trouble aside from finding drive trays, and I think the FCB is the PIC16 version, which wouldn't flash. BIOS and BMC flashed fine on all 8 nodes. The drive trays have been an issue, as mine are "DP/N 0D273R", which I guess is NOT fully compatible with the "07JC8P" part (the D273R tray has a single indicator on the front and 1 fat/1 skinny prong; the 7JC8P has 2 indicators on the front and 2 fat prongs on the back). If you need to order trays for a C6100, check which you need, as eBay sellers generally treat them as equivalent. The machines are:
24-bay 2.5" drive, 4-node, each node with 24 GB RAM, 2 x L5630, and 1 x 256 GB SSD.
Given the problems I've had getting trays, one chassis is currently powered off and driveless.

The S6500 I got from pdneiman at MET International - he seems to be fairly well known around here. The machine is BIG - 280 pound shipping weight. I was actually worried about racking it on my own. However, after pulling all the sleds and power supplies, the chassis itself was fairly light. The chassis is configured with 8 x SL170 G6 nodes. Each node is 2 x E5520, 24 GB RAM, 2 x 120 GB SSD (mounted with an Icy Dock), and 1 Mellanox 10GbE SFP+ card.

Rack mounting everything in 24U was a pain for cabling - I have three 24-port switches (IPMI, LAN, and 'test network'). It seems I placed everything in EXACTLY the wrong place. Cables are either 1" too short or 1' too long! Still, it fits and doesn't look too horrible. The HP is MUCH longer than the C6100s and sticks out the back of the rack at least 6".

The HP nodes are currently my favorite. The sleds are SOLID - heavy, rigid, and they just feel well made. At ~$225/node including 10GbE (no HDD), they're a great deal! However, the S6500 does limit your storage options, as it uses internal mounts for drives and supports fewer drives per node. Also, these have 'basic' HP iLO, so about all you can do is turn them on and off via IPMI. I'm still trying to get iLO SoL working for console access. The port layout is a bit nicer too: you can easily reach the ethernet, video, and USB ports (my C6100 has sled handles in the way of the ports, etc.). The S6500 has 'front' facing ports, which seemed nice when I racked it, but having seen how ugly the cabling got, I'll probably mount it 'backwards' next time I redo the rack. One minor nit: the ethernet interfaces are labeled backwards on the front panel - ethernet 2 actually attempts DHCP/PXE before ethernet 1, and the jack labeled ethernet 2 shows as eth0 in Linux.
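
For what it's worth, since even basic iLO speaks standard IPMI for power control, I just loop over the BMC addresses with ipmitool when I want to check all 8 sleds at once. A minimal Python sketch of the idea is below - the hostnames and credentials are placeholders rather than my real setup, and it assumes ipmitool is installed and the iLO ports answer on the IPMI LAN interface. (SoL, if I ever get it working, should be the same ipmitool invocation with `sol activate` in place of the chassis command.)

```python
#!/usr/bin/env python3
"""Rough sketch: poll chassis power state on the SL170 iLOs over IPMI.

Assumes ipmitool is installed locally; the BMC hostnames and
credentials below are placeholders, not my actual setup.
"""
import subprocess

# Hypothetical iLO/BMC addresses for the 8 sleds on the IPMI switch.
NODES = [f"sl170-ilo-{n}.lab.local" for n in range(1, 9)]
IPMI_USER = "admin"       # placeholder
IPMI_PASS = "changeme"    # placeholder


def power_status(host: str) -> str:
    """Ask the BMC for its chassis power state via ipmitool."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-P", IPMI_PASS,
         "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    for node in NODES:
        try:
            print(f"{node}: {power_status(node)}")
        except subprocess.CalledProcessError as err:
            print(f"{node}: IPMI query failed (exit {err.returncode})")
```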

The C6100s had the 2 top nodes of each chassis 'lose' the BMC after about a year. A power failure apparently confused them, and I had to reflash the BMC to get them back. Other than that and the drive tray issue, they've worked well. Lots of people here have discussed the C6100, so I won't repeat it all. I will say the Dell IPMI, while dog slow, is much more featureful than the 'basic' HP iLO (HP's licensed iLO is much nicer than the Dell's in my opinion, though - if you can get HPs with a valid iLO license, it's REALLY nice). I gave up looking for the fancy Dell 6-drive SATA cable and just ordered some SATA cables from Monoprice. Once I source trays, I'll connect 4-5 bays per node.

If I was going to do this all over again, I'd probably replace the C6100 nodes with used HP DL2000's...

I'm building out one of the C6100s for use as 'internal' servers running a firewall (Shorewall), Foreman, and GitLab. It will also run a 3-node Ceph cluster to provide a small amount (~1.5 TB) of mirrored storage. The firewall is running on Debian 8, while the other 3 nodes are Ubuntu 14.04 LTS.
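
By 'mirrored' I just mean a 2-way replicated pool, so usable space works out to roughly half of raw capacity. A rough Python sketch of the pool setup is below - the pool name and PG count are placeholders, and it assumes the ceph CLI and an admin keyring are already present on the node it runs from. This is just the shape of the configuration, not my exact deployment.

```python
#!/usr/bin/env python3
"""Rough sketch: create a 2-way replicated ("mirrored") pool on the
3-node Ceph cluster. Pool name and PG count are placeholders; assumes
the ceph CLI and an admin keyring are available on this node."""
import subprocess

POOL = "lab-mirrored"   # hypothetical pool name
PG_NUM = "64"           # modest PG count for a handful of OSDs


def ceph(*args: str) -> None:
    """Run a ceph CLI subcommand and raise if it fails."""
    subprocess.run(["ceph", *args], check=True)


if __name__ == "__main__":
    # Create the pool, then pin it to 2 replicas so every object lives
    # on two different nodes ("mirrored"), while min_size 1 still
    # allows I/O if one replica is down.
    ceph("osd", "pool", "create", POOL, PG_NUM)
    ceph("osd", "pool", "set", POOL, "size", "2")
    ceph("osd", "pool", "set", POOL, "min_size", "1")
```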

The remaining 12 nodes (8 HP, 4 Dell) are for testing various OS/cloud stacks/etc. So far, both the HP and the Dell nodes have run Ubuntu and CoreOS without difficulty. I'm hoping to play with both OpenStack and Kubernetes on these, including an HA OpenStack setup.


Now... if I could just find a cheap 24-port 10GbE switch...
 

Chuntzu

Active Member
Jun 30, 2013
Got to love those hot summers, I feel your pain there. I did the exact same setup with central air and a portable AC for the "server room".
 

Mech

New Member
Dec 8, 2015
very nice color coding. :)
Thanks... I wish I had placed things a bit better in terms of cable length...
I didn't show it in the picture, but the right side of the rack has so much cable in the runs,
I'm not sure the side panel will fit back on :p