IBM Bladecenter


Quartzeye

New Member
Jul 29, 2013
16
0
1
I currently have a C6100 and a 24-bay Supermicro SAN. I love them both, but has anyone checked out the IBM BladeCenter chassis?

After doing a little searching, it appears you can get quite a lot of bang for the dollar with the BladeCenter. There are plenty of modules, parts, and blades available fairly cheap, and some seem to be much cheaper than comparable parts for other traditional systems.

Am I missing something? I am wondering why the enthusiast market hasn't picked up on these and begun adapting them for their own uses. It would seem to be an ideal platform for both virtualized and Hadoop environments. Is it the unfamiliar nature of the hardware and architecture that limits people's appetite for tinkering with the BladeCenter, or is there something else I may be overlooking?
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,073
974
113
NYC
Most IBM BladeCenter gear is more expensive and older. If you're looking at the 5400 series or an older generation then it does look attractive, but you're dealing with higher power consumption with the blade chassis and older parts.
 

Hank C

Active Member
Jun 16, 2014
644
66
28
If you are looking for high density, I would recommend the Dell C6220 or Intel H2x00 series.
 

Patriot

Moderator
Apr 18, 2011
1,451
792
113
Blades are going out of style... I wouldn't sink much money into them as future support is quite uncertain.
 

spyrule

Active Member
I think a lot of it is that the chassis and modules are cheap, but the actual servers are overpriced. Secondly, for home use they draw a huge amount of power, many if not most are 240V powered, and they are loud as hell. To top it all off, many modules require very specific firmware versions to work with each other, and many don't have fully matching firmware versions, so you end up having to buy a new module just to work with the newer firmware on another module (and many times you cannot roll back a firmware update either). That's why we ended up getting rid of our HP blades after just a few years. They also only really make sense if you're running a high density of them; otherwise the cost per watt is simply too high.
 

bds1904

Active Member
Aug 30, 2013
271
76
28
Blades make sense for 7+ servers, that's it. Every single one I have dealt with is 240V with quad 1200W+ power supplies. A lot of the ones for sale are actually 208V 3-phase; you really have to watch out for those, because there is no way you are going to power that at home unless you have a farm. Just powering on the BladeCenter at idle with no blades powered up is around 400W, and each blade adds roughly 100W-170W at idle (rough numbers below).
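To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python using the ballpark figures above (estimates, not measurements):

```python
# Rough idle-power sketch for a populated BladeCenter, using the ballpark
# figures above (~400 W for the bare chassis, ~100-170 W per idle blade).
# These are estimates from the post, not measured values.

CHASSIS_IDLE_W = 400          # chassis, switches, AMM, fans - no blades powered on
BLADE_IDLE_W = (100, 170)     # per-blade idle range

def idle_draw_watts(num_blades: int) -> tuple[int, int]:
    """Return a (low, high) estimate of idle draw in watts for num_blades blades."""
    low = CHASSIS_IDLE_W + num_blades * BLADE_IDLE_W[0]
    high = CHASSIS_IDLE_W + num_blades * BLADE_IDLE_W[1]
    return low, high

for n in (0, 7, 14):
    lo, hi = idle_draw_watts(n)
    print(f"{n:2d} blades: ~{lo}-{hi} W at idle")
# 0 blades: ~400 W, 7 blades: ~1100-1590 W, 14 blades: ~1800-2780 W
```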

The big thing back in the day was that you could get the FC switch built into a single module, so with 2 modules (4 FC connections total to the chassis) you had a redundant setup. Those modules are pretty cheap now, but that still leaves you with lackluster networking. Most blade nodes you see only have 2 GbE connections and can't be expanded. The ones that have 10Gb are still pretty cheap, but the 10GbE interfaces for the blade chassis are still super expensive, so no price advantage there at all. IPMI is also a PITA.

You would be better off checking into an HP S6500 8-node chassis. Check out my thread.

https://forums.servethehome.com/index.php?threads/hp-s6500-8-node-chassis.3887/
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
My only experience is with the HP C7000 chassis (similar in size and capability to the IBM BladeCenter), and I also agree that it is probably not suitable for use in a home. If you really want to play with blade technologies at home, there are a few products out there targeted more at branch-office than datacenter use. The two I know of are HP's C3000 chassis (8 blades, compatible with all the same blades and network options as the bigger C7000, but uses single-phase PSUs that will run at 110V) and the Dell VRTX (4 nodes plus a shared storage option, but it does not support all of Dell's blades).

Also keep in mind that these blade solutions will mostly not accept any kind of standard expansion card - the expansion slots are proprietary form-factors with proprietary connectors. The VRTX is an exception as it has a few regular PCIe slots in the chassis that can be mapped to individual blades.

Also, HP now requires a support contract in order to download firmware/driver updates for most of their hardware. So if you don't know someone who can download those things for you you may want to stay away from their gear.
 
  • Like
Reactions: Bradford

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
Whoohoo, I get to chime in here on these absolute pieces of shite. Run, do not walk, away.

Here are the issues I've seen at 3 companies with these turds:
* AMM's randomly failing, even when redundant, where the management simply can't be reached. Then a few hours or days later it's working again. That's AWESOME the day the servers are down.
* Blade slots going down but still working - you just can't manage them. No connectivity, no power cycle, etc.
* Blades in them, at least the ones using Emulex 10GbE adapters, sometimes "lose their iSCSI settings" - they just revert back to "NIC" vs "iSCSI" for no good reason.
* Don't even get me started on the Emulex PoS's... nothing better than doing work for a company and being told "yeah, sometimes you have to reset the adapter and reboot 2-3 times before it takes...."
* The 1GbE Cisco switches - we tried the latest firmware, and then 19 versions back, during a maintenance window. Some bug that did not allow them to be reached on the management IP, only via console or the AMM. (See the aforementioned AMM issues.) Cisco and IBM noted it as a "known bug" - for 19 freaking versions????
* Everything IBM is "feature key". Unlike a Dell or Supermicro: you want RAID 5? Key. IMM enhancements? Key. Caching on RAID? Key. iSCSI on the NIC? Key. Don't replace that server or flash anything without noting those keys first.

Now, the hardware itself....
* There are 2 power domains in the chassis - left and right.
* Each power domain has redundant power
* Each of those power supplies is 208V/30A - so you're running 4x 208V/30A drops to the chassis. In the datacenter here in Edmonton, that was running $1,200/month for the primary and $800/month for the secondary, or $4,000 a month in power on the bloody things (the datacenter bills on drop/whip, not usage). That was AWESOME when the previous guy decided that 8 servers per chassis made sense. $96,000 a year in *power* - even if they were turned off, because we were charged by drop (rough math in the sketch after this list). Fun stuff. No wonder I couldn't get anything new.
* There are 14 blade slots, but as you start going up in CPU and expansion, you start running out of power. Populate them with (going from memory) blades with >95W TDP and you can use 12 slots - 6 per power domain; >135W TDP and you're down to 10 per chassis.
* That chassis has 6 handles on it for a reason. Two people isn't enough to really lift it well. You're going to want 4 people to put it into a rack - 3 to lift, one to position.
* The switching options are insanely priced. When I replaced the one chassis, while we were looking to migrate to 10GbE iSCSI, I was able to pay for 5 replacement Dell R620's (2x8C/384GB/4x10GbE) with the savings from not buying the 2x chassis 10GbE switches. The ToR 10GbE and such was still needed of course. Oh, and those 5x R620's ran on 2x 15A circuits, which cost us $400/month in power vs $4,000 (see the same sketch below). 3-year savings, on paper: $130,000 - that buys a LOT of 1U rack servers....
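To show where those numbers come from, here's a quick sketch of the math. The drop prices are the ones quoted above; the two-chassis count is inferred from the yearly total, so treat that as an assumption:

```python
# Rough cost math for the figures above. Prices are the quoted Edmonton
# datacenter drop fees; the chassis count is inferred from the yearly total.

PRIMARY_DROP = 1200    # $/month per primary 208V/30A whip
SECONDARY_DROP = 800   # $/month per secondary 208V/30A whip
DROPS_PER_CHASSIS = 2  # two power domains, each with a primary + secondary pair

per_chassis_month = DROPS_PER_CHASSIS * (PRIMARY_DROP + SECONDARY_DROP)
print(per_chassis_month)                 # 4000 $/month per chassis
print(2 * per_chassis_month * 12)        # 96000 $/year, assuming two chassis

# Replacement scenario: 5x Dell R620 on 2x 15A circuits at ~$400/month.
blade_month, rack_month = 4000, 400
print((blade_month - rack_month) * 36)   # 129600 -> the ~$130k 3-year savings
```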

Those are the "facts" I can share. My "opinions" are this:
* The power requirements are nuts. Don't do it.
* The weight is nuts.
* The lock-in is crazy. Forget buying the best bang-for-the-buck SAS chassis, some 1U servers, and a 2U from another vendor - you're married, you get what "the wife" tells you.
* The expansion options are horrid, and expensive. You will NOT be buying $40 Brocade 1020's or a $100 RAID card for these things.
* The blades are extremely thin, so you need LV 1.35V RAM - which means you're extremely limited on the height and power of the RAM. Also, due to how the DIMMs sit in the servers (HS22/HS21/HS23/HX5 are the ones I've seen), anything with a heat spreader on it is unlikely to work - nay, even FIT.
* Want to put 3x 10GbE cards in it? Nope. Some 40GbE Infiniband you found? Nope.
* 2x 2.5" disks per. So mirrored pair of 10K SAS or SSD - or SAN connecitivy
* RAID card is the LSI 1068E. Here's the fun part: the LSI 1068E is on VMware's HCL. The HS23 is on the HCL. When I had to open a case with VMware, they very kindly pointed out that *IBM's* LSI 1068E in an HS23 was specifically unsupported. If you had local disks, you'd get purple screens in... v5.1 I think, might have been 5.5.
* There's a bug on the QLogic 4Gbit 2-port FC in combination with the LSI 1068E _even being present_ that will randomly create an APD situation if ALUA is enabled.
* Blades are ONLY good if you're dealing in massive scale - e.g. 80+ servers, multiple chassis. Otherwise, your fault domain is massive if you only have 1 or 2 chassis. You also end up not having/affording a DEV/TEST type setup, so you have no safe way to test firmware updates, etc., without the potential to gibble up 13 other systems. That's always fun. Every environment I've seen had blades with 32-128GB, MAYBE 256GB. Being a VMware guy, I've always pushed for them to be replaced with 1U rack servers. An R620 with 24x32GB for 768GB displaces a ton of 32GB HS22's or 96GB HS23's - both of which only have 8 DIMM slots vs 12-16-18-24 on a decent 1U DP rackmount. That alone, especially for a home lab where you might have access to "worthless" 4GB DIMM's, is the killer feature for me (quick math below). Companies that have upgraded are practically giving away 4GB DIMM's, and if you only have 8 slots, you're done at 32GB. (And again, that's if it's LV 1.35V slim RAM that you got for cheap - if not, well... sigh.)
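A quick memory-density comparison to back that last point up - slot counts are the ones I mentioned, DIMM sizes are just illustrative:

```python
# Max RAM per node = DIMM slots x largest affordable DIMM.
# Slot counts from the post above; DIMM sizes are illustrative examples.

nodes = {
    "HS22/HS23 blade":          (8,),
    "decent 1U DP rack server": (12, 16, 18, 24),
}

for name, slot_options in nodes.items():
    for slots in slot_options:
        print(f"{name:26s} {slots:2d} x 4GB = {slots*4:3d} GB   "
              f"{slots:2d} x 32GB = {slots*32:4d} GB")
# An 8-slot blade tops out at 32 GB with cheap 4GB DIMMs; a 24-slot 1U box
# reaches 96 GB with the same DIMMs, or 768 GB with 32GB sticks.
```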

I've been dealing with these turds for 3 years now. I have no clue how IBM stays in business.

Dell R610's or C6100's for me, all day long, if I need older; R620/R720 if newer. That HP S6500 chassis is okay, but it's too many servers in too big a chassis, with limited options. The C6100 is good if you don't want a lot of expansion, and I love mine. But if you want to start getting fancy and adding more NIC's or RAID, not having 3x PCIe slots starts becoming a hurdle. It just depends on what you're playing with.

Sorry for going off on a rant. I hate these things so much. I wouldn't take one if I was given one for free, other than to gut the CPU's/RAM, and liquidate the rest for money to spend on something good.

Please don't do it. You'll make me very very sad.
 
  • Like
Reactions: Patrick and lmk

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
NetWise - another Edmontonian, eh? That power price is rather ridiculous - what datacenter was that at?

I've been a fan of blades for VMware environments for a while now (HP in my case), though we host it in our own small datacenter, so density is more of an issue for me and power is pretty much a non-issue. When I first got into blades, after putting 7 into a chassis that could take 16, they had paid for themselves just in the cost savings of not needing optics for all the FC connections.

Nowadays I am starting to look back at rack servers again, but 2U over 1U. I want more flexibility for future options that may involve local storage (software-defined-storage type stuff) or more/different PCIe expansion (PCIe flash cards, GPUs for VDI, etc.). I would still prefer blades over 1U servers.
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
Rogers downtown - Primus/BlackIron/etc. The power is stupid, but if you know how to play the game, it's not so bad.

You bring up some other good reasons I dislike blades - no GPU, no Teradici cards, no PCIe flash (in most cases, some have very specialized versions).

I do agree, if you need VDI GPU's, then 2U is a requirement. But pretty much ONLY then. I've been using Dell R620's, and with 2x 10GbE LOM and 2x 10GbE add-in, that leaves 2x PCIe x8 slots - that's plenty. The same 10U of rack space that holds a 14-blade chassis (but maybe only 12 or 10 blades because of power draw) holds 10 of these. Add to that the fact that I can move "part of a chassis" - 2 servers can go to the lab, be given to staff, sent to a branch office, etc. What do I do with 9 old blade servers? Nothing. Hold down a corner of a desk. If I had 9 1U rack servers - well, I have TONS of options.

I can agree that with a blade chassis you're coming up from the chassis switch to the top-of-rack or core, and probably only have 2 or 4 cables to run for that, and fewer SFP's to buy. But equally, every chassis HAS to go to the top of rack through those potential choke points. I'm seeing that now with 6x blade chassis that have 2 ESXi hosts (among other physicals - eww) per chassis. To do vMotion/svMotion or just inter-host traffic, it has to go host -> chassis switch -> ToR -> core (maybe) and back. That's not so good. It would be better to just have 40x 1U servers going up to a 48-96 port ToR, and then everything is local without that choke. It's not so bad for data, but for iSCSI/NFS, if you have 14 blades under heavy usage that are only connecting up to the ToR at 2-4x 10GbE, you may not be getting the bandwidth you're hoping for (rough math below). Though, if your NetApp only has 2x 10GbE coming out of it, the discussion is moot :)
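Rough oversubscription math for that choke point, using the blade NIC and uplink counts above (a sketch, not a measurement of any real chassis):

```python
# Edge bandwidth inside the chassis vs. uplink bandwidth to the ToR.
# 14 blades with 2x 10GbE each, versus 2-4x 10GbE uplinks out of the chassis.

blades, nics_per_blade, nic_speed_gb = 14, 2, 10
edge_gb = blades * nics_per_blade * nic_speed_gb     # 280 Gb/s of blade-facing ports

for uplinks in (2, 4):
    uplink_gb = uplinks * 10
    print(f"{uplinks} x 10GbE uplinks: {edge_gb}/{uplink_gb} "
          f"= {edge_gb / uplink_gb:.0f}:1 oversubscription")
# 2 uplinks -> 14:1, 4 uplinks -> 7:1, before traffic even leaves the rack
```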

I can rant all day long about these, if anyone's willing to listen. I hate them so much. The one I was given was very therapeutic to just take a hammer to. There were 4 of us who took out our frustrations.

Blades CAN be good, for the right environment. But if you don't have 50+ servers, or don't have an electrician, they're probably not right....
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
We've currently got dual 30A 3-phase PDUs in each rack and I've never had an issue with an HP chassis limiting me based on power - I can put 2 chassis / 32 blades into a rack right now, and can add two more 30A 3-phase PDUs to support 4 chassis without having to call the electrician - though if I do that to more than a rack or two, I will need to add hot-aisle containment to handle the heat output. And we don't run our blade enclosures to top-of-rack - they are wired directly back to the core.
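Roughly, here's what those PDUs work out to (a sketch assuming 208V line-to-line and the usual 80% continuous-load derating - both assumptions on my part, since only the amperage matters for the PDU spec):

```python
import math

# Approximate usable power per 30A 3-phase PDU. 208V line-to-line and the 80%
# continuous-load derating are assumptions; treat kVA as roughly kW for modern
# high-power-factor server PSUs.
AMPS, VOLTS_LL, DERATE = 30, 208, 0.8

per_pdu_kva = math.sqrt(3) * VOLTS_LL * AMPS / 1000    # ~10.8 kVA nominal
usable_kw_per_pdu = per_pdu_kva * DERATE               # ~8.6 kW continuous

for pdus in (2, 4):
    print(f"{pdus} PDUs: ~{pdus * usable_kw_per_pdu:.1f} kW usable per rack")
# 2 PDUs: ~17.3 kW, 4 PDUs: ~34.6 kW - headroom for 2 or 4 loaded chassis
```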

I agree that blades carry their limitations - I just also see 1U servers as almost as limited. You get a few extra DIMM slots with the extra width, and 2 or 3 standard PCIe slots (though probably not all FL/FH) instead of the pair of mezz slots I get in a blade - but that's still not a lot of flexibility. 2U can do anything. If I'm going to live with limitations, I'll take the advantages blades bring as compensation for those limits.
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
Agreed, modern blades have their place.

But typically the blades will have 8-12 DIMM slots and a rackmount will have 18-24 - in a dense virtualization environment, that's a massive difference. When adding another host just to gain 12 more DIMM slots means licensing for VMware, Windows Datacenter, backup products, management products, and $1000-or-something-stupid SFP+ transceivers, anything one can do to scale up vs scale out is drastically beneficial (rough per-host math below).
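A very rough per-host scale-out cost sketch - the transceiver price is the one quoted in this thread, but every license figure is a made-up placeholder, not a real quote:

```python
# Incremental cost of adding one more 2-socket host, to show why scaling up
# (more DIMMs per host) beats scaling out. The SFP+ price comes from this
# thread; every license figure below is a hypothetical placeholder.

HYPOTHETICAL_COSTS = {
    "VMware licensing (2 sockets)":    7000,   # placeholder
    "Windows Datacenter (2 sockets)":  6000,   # placeholder
    "backup + management licensing":   3000,   # placeholder
    "4x SFP+ transceivers @ $1000":    4000,   # thread figure, host end only
}

extra_per_host = sum(HYPOTHETICAL_COSTS.values())
print(f"~${extra_per_host:,} on top of the server itself, just to gain 12 DIMM slots")
# Versus roughly the cost of a handful of larger DIMMs to scale up an existing host.
```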

99% of all the virtualization systems I deploy are 2x 10GbE data/2x 10GbE Storage, and that's all the expansion they're ever going to want. If there was a need for VDI, then those would be placed on 2U boxes with Grid cards, purposely segregated for the VDI environment.

Back to the point in general though - BladeCenters are horrible for home lab :)
 

TuxDude

Well-Known Member
Sep 17, 2011
616
338
63
We both agree - Blades (not just IBM specific) are bad for home use.

But requirements and problems vary by environment. We're educational and get massive discounts on most software but very little on hardware. Buying an extra VMware license isn't that bad and Windows Datacenter (and any other MS product) is practically free. It's those $1000 SFP+ transceivers that really hurt - 4 links needing one on each end is $8000 saved by using a blade that talks to switches over a midplane board.
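Spelled out, that transceiver math looks like this (prices are the ones quoted above):

```python
# Cost of the optics avoided by running 4 links over the chassis midplane
# instead of cabling them with an SFP+ transceiver on each end.
links, ends_per_link, cost_per_sfp = 4, 2, 1000
print(links * ends_per_link * cost_per_sfp)   # 8000 -> the $8,000 saved per blade setup
```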
 

NetWise

Active Member
Jun 29, 2012
596
133
43
Edmonton, AB, Canada
You didn't mention education or free software ;). That changes the scope! Still, I opt to use SFP+ twinax to get to the ToR switching and avoid SFP+ transceivers as much as possible. All those aforementioned rack servers use twinax, which is significantly cheaper. The only thing I used transceivers for was Nexus to NetApp storage, as both demanded their own. I had no such issues with copper from the Force10 to the NetApp, though...