Woohoo, I get to chime in here on these absolute pieces of shite. Run, do not walk, away.
Here are the issues I've seen at 3 companies with these turds:
* AMMs randomly failing, even when redundant, where the management module simply can't be reached. Then a few hours or days later it's working again. That's AWESOME on the day the servers are down.
* Blade slots going down but still working - the blade keeps running, you just can't manage it. No connectivity, no power cycle, etc.
* Blades in them, at least with Emulex 10GbE adapters, sometimes "lose their iSCSI settings" - the ports just revert from "iSCSI" back to "NIC" for no good reason.
* Don't even get me started on the Emulex PoS's... nothing better than doing work for a company and being told "yeah, sometimes you have to reset the adapter and reboot 2-3 times before it takes..."
* The 1GbE Cisco switches - we tried the latest firmware, and then 19 versions back, during a maintenance window. Some bug meant they couldn't be reached on the management IP, only via console or the AMM (see aforementioned AMM issues). Cisco and IBM noted it as a "known bug" - for 19 freaking versions????
* Everything IBM is "feature key". Unlike a Dell or SuperMicro: you want RAID 5? Key. IMM enhancements? Key. Caching on RAID? Key. iSCSI on the NIC? Key. Don't replace that server or flash anything without noting those keys first.
Now, the hardware itself....
* There are 2 power domains in the chassis - left and right.
* Each power domain has redundant power
* Each of those power supplies is 208V/30A - so you're running four 208V/30A drops to the chassis. In the datacenter here in Edmonton, that ran $1,200/month for a primary drop and $800 for a secondary, or $4,000/month in power per chassis (the datacenter bills per drop/whip, not usage). That was AWESOME when the previous guy decided that 8 servers per chassis made sense - $96,000 a year in *power*, even if the blades were turned off, because we were charged by the drop. Fun stuff. No wonder I couldn't get anything new.
* There are 14 blade slots, but as you go up in CPU and slots, you start running out of power. Populate them with (going from memory) blades over 95W TDP and you can only use 12 slots - 6 per power domain. Over 135W TDP and you're down to 10 per chassis.
* That chassis has 6 handles on it for a reason. Two people aren't enough to really lift it well. You're going to want 4 people to put it into a rack - 3 to lift, one to position.
* The switching options are insanely priced. When I replaced the one chassis, while we were looking to migrate to 10GbE iSCSI, I was able to pay for 5 replacement Dell R620's (2x8C/384GB/4x10GbE) with the savings from not buying the 2x chassis 10GbE switches. The ToR 10GbE and such was still needed, of course. Oh, and those 5x R620's ran on 2x 15A circuits, which cost us $400/month in power vs $4,000. (3-year savings, on paper: $130,000 - that buys a LOT of 1U rack servers. Quick math in the sketch after this list.)
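If you want to sanity-check those dollar figures, here's the back-of-napkin math as I remember it. The drop prices and the $400/month figure are from my own bills; the two-chassis count is just the assumption that makes the $96k/year number line up, so treat it as illustrative.

```python
# Back-of-napkin power math for the BladeCenter situation above.
# Dollar figures are from my colo bills (billed per drop/whip, not usage);
# the two-chassis count is an assumption that makes the yearly figure line up.

primary_drop = 1200    # $/month per 208V/30A primary drop
secondary_drop = 800   # $/month per 208V/30A secondary drop

# One chassis: 2 power domains, each fed by a primary + secondary drop
chassis_monthly = 2 * (primary_drop + secondary_drop)   # = $4,000/month
yearly_two_chassis = 2 * chassis_monthly * 12           # = $96,000/year

# The 5x R620 replacement ran on a pair of 15A circuits
r620_monthly = 400
savings_3yr = (chassis_monthly - r620_monthly) * 36     # = $129,600 (~$130k)

print(f"one chassis:  ${chassis_monthly:,}/month")
print(f"two chassis:  ${yearly_two_chassis:,}/year in power alone")
print(f"3-year delta after the R620 swap: ~${savings_3yr:,}")
```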
Those are the "facts" I can share. My "opinions" are these:
* The power requirements are nuts. Don't do it.
* The weight is nuts.
* The lock-in is crazy. Forget buying the best bang-for-buck SAS chassis, some 1U servers here, and a 2U from whichever vendor you like - you're married, you get what "the wife" tells you.
* The expansion options are horrid, and expensive. You will NOT be buying $40 Brocade 1020's or a $100 RAID card for these things.
* The blades are extremely thin, so you need low-voltage 1.35V RAM - you're extremely limited on both the height and the power draw of the DIMMs. Also, due to how they sit in the blades (HS22/HS21/HS23/HX5 are the ones I've seen), anything with a heat spreader on it is unlikely to work - nay, even FIT.
* Want to put 3x 10GbE cards in it? Nope. Some 40Gb InfiniBand card you found? Nope.
* 2x 2.5" disks per. So mirrored pair of 10K SAS or SSD - or SAN connecitivy
* The RAID card is an LSI 1068E. Here's the fun part: the LSI 1068E is on VMware's HCL. The HS23 is on the HCL. When I had to open a case with VMware, they very kindly pointed out that *IBM's* LSI 1068E in an HS23 was specifically unsupported. If you had local disks, you'd get purple screens in... v5.1 I think, might have been 5.5.
* There's a bug where the QLogic 4Gbit 2-port FC adapter, in combination with the LSI 1068E _even being present_, will randomly create an APD (all paths down) situation if ALUA is enabled.
* Blades are ONLY good if you're dealing in massive scale - e.g. 80+ servers, multiple chassis. Otherwise, your fault domain is massive if you only have 1 or 2 chassis. You also end up not having/affording a DEV/TEST type setup, so you have no safe way to test firmware updates and the like without the potential to gibble up 13 other systems. That's always fun. Every environment I've seen had blades with 32-128GB, MAYBE 256GB. Being a VMware guy, I've always pushed for them to be replaced with 1U rack servers. R620's with 24x32GB for 768GB displace a ton of 32GB HS22's or 96GB HS23's - both of which only have 8 DIMM slots vs 12-16-18-24 on a decent 1U dual-socket rackmount. That alone, especially for a home lab where you might have access to "worthless" 4GB DIMMs, is the killer feature for me. Companies that have upgraded are practically giving away 4GB DIMMs, and if you only have 8 slots, you're done at 32GB (and again, that's if it's low-voltage 1.35V slim RAM that you got for cheap - if not, well... sigh). The slot math is spelled out in the sketch after this list.
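Since I keep harping on DIMM slots, here's that arithmetic in the plainest possible form. The slot counts are the ones quoted above; the DIMM sizes are just illustrative.

```python
# Why DIMM slot count beats DIMM size when you're scrounging cheap RAM.
# Slot counts as quoted above (8 on an HS22/HS23, 24 on an R620-class 1U);
# DIMM sizes are illustrative.

def max_ram_gb(slots: int, dimm_gb: int) -> int:
    return slots * dimm_gb

print(max_ram_gb(8, 4))    # 32 GB  - blade fed with junk-bin 4GB sticks, done
print(max_ram_gb(8, 32))   # 256 GB - same blade with pricey 32GB LV DIMMs
print(max_ram_gb(24, 4))   # 96 GB  - 1U box on the same cheap 4GB sticks
print(max_ram_gb(24, 32))  # 768 GB - the R620 config mentioned above
```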
I've been dealing with these turds for 3 years now. I have no clue how IBM stays in business.
Dell R610's or C6100's for me, all day long, if I need older; R620/R720 if newer. That HP c7000 chassis is okay, but it's too many servers in too big a chassis, with limited options. The C6100 is good if you don't want a lot of expansion - I love mine. But if you want to start getting fancy and adding more NICs or RAID, not having 3x PCIe slots starts becoming a hurdle. Just depends on what you're playing with.
Sorry for going off on a rant. I hate these things so much. I wouldn't take one if I were given it for free, other than to gut the CPUs/RAM and liquidate the rest for money to spend on something good.
Please don't do it. You'll make me very very sad.