HP S6500 8 Node Chassis


Dajinn

Active Member
Jun 2, 2015
Even at $175, if these do have the extra QDR port preinstalled I think it could be worthwhile. But the other thing I still have no clue about is the QDR port in the I/O. AFAIK you need the enablement chip installed for it to work, and the only guy selling it wants $115 apiece. Times 8, that's just not worth it for one port. And no one really seems to know.
 

tjk

Active Member
Mar 3, 2013
I don't think the add-in card is IB; it looks more like a SAS port, plus it says Port 1E on it and has the SAS label/logo on it.

So the on-board P2 10Gb is active, and the IB P1 port requires another 115 bucks to make it active? If so, not so great. I have dual-port IB cards in mine that I picked up for under 100 bucks each, which at least gives me port/cable redundancy vs. these with one on-board port.
 

Patriot

Moderator
Apr 18, 2011
The P212 is a 4i/4e RAID controller; the 10Gb and 40Gb are on the mobo.
The enablement kit looks like a pair of resistors plugged into a 10-pin port on the motherboard.
If they don't come with them, buy one and make the rest.
 

Dajinn

Active Member
Jun 2, 2015
Well, just got the invoice for the S6500 with InfiniBand and SAS cards in each node: $1,124 shipped. What do you think about paying that much for so much 1366 hardware in 4U of space? That's about $140 a node, w/o CPU/RAM, but with all 4 PSUs and fans. No heatsinks.

I really want to build up a datacenter-class home lab. I don't know if I'll ever be able to really tax these servers to their potential, but I'd love to be able to set up a dedicated Exchange and AD server and just completely go to town with failover clustering, SAN, iSCSI, etc. ... but is that all worth $1,124? I mean, dropping CPUs and 8GB of RAM into each node will bring the total to about $2,680 when all is said and done.

My other alternatives are either going for one of those 1U/2-node Supermicro builds (which have QDR IB built in, $399 + $45 shipping), or going for a 2U/2-node or 2U/4-node Supermicro build where I'll have to buy IB cards. Or lucking out and finding a good deal on a fully configured C6100.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
No. Don't do that. Too much $ for 1366, and way too much power to run it all at home all the time like you want to, IMHO.

Wait for an E5 deal if you want 2-node, 4-node+. Or look in the For Sale section here; I even have some 4-node units for sale (E5 v1/v2), NIB, with CPUs and probably enough RAM to set you up.

If a ConnectX-3 will do what you want, IIRC they're $80-120 on eBay.
 

Dajinn

Active Member
Jun 2, 2015
Quick question: aside from using StarWind Virtual SAN, can you make four different machines iSCSI targets and direct-connect each one to the others using QDR?
 

H8ROADS

New Member
Oct 27, 2015
Denver, CO
Sorry to resurrect an old thread, but I thought I'd weigh in here after recently purchasing one of these. In case anyone is still interested, a few other thoughts that I didn't see mentioned above:

The unit is definitely loud, but does fine in my basement. At full fan, though, it's definitely a Boeing 747, so that is something to be aware of. I'm running it half loaded with 4 blades powered off and it's been relatively quiet in my house. I can hear it spin up now and then when it's dead quiet, but otherwise it goes unnoticed with normal house noise.

Advanced keys are an absolute must for the iLO; otherwise loading these things is a PITA.

The cost for a home lab I thought was really great considering some other options. I have 6 Dell R710/R715s, and their power usage was off the charts compared to the S6500. I now keep only one Dell R715 and a T610 around as aux servers, but they're usually not powered on. The S6500 has 4x 1200W PSUs that max out at 900W at 110V, and with only 4 blades running, iLO usually reports power usage in the ~300W range. I find this way more acceptable for my house electric bill than the alternative.

Everything needs updating firmware-wise, and while HP has made this an unjoyous clusterfuck, I actually managed to find everything I needed in a matter of minutes via alternative sources and Google.

Overall, I'm very pleased with the result of my home lab build so far. I just got my IB switch installed and am waiting on cables, but over Ethernet things have been working great. Having either 10G or 40G IB is a big bonus for the G7s. The G8s look nice too, and supposedly you can mix and match. I might get a 2U G8 to use NVIDIA vGPU later, if that will work.

I used L5640s for the procs and found plenty of affordable RDIMMs on eBay to put 12 cores and 96GB in each of my 4 blades for a few grand, but I could have easily spent the same amount on 2 full-size servers. All in, I'm at about $4k with the chassis, memory, and CPUs, with expansion capabilities and low power usage. I call it a win.
 

Dajinn

Active Member
Jun 2, 2015
What was the total cost of the server? And did the nodes themselves all have the upgrade keys installed for the InfiniBand port?
 

H8ROADS

New Member
Oct 27, 2015
Denver, CO
The chassis itself was $1,000. I purchased 2 at $1,800 plus $200 for freight shipping; another forum member has the other. The Advanced keys are not included, and aren't required for the IB port, just for the advanced iLO functionality. The IB/10G is built in, but you can only use one or the other, not both. It does require a dongle to connect to the board, but ours included it.

 

Dajinn

Active Member
Jun 2, 2015
Okay, so it did include the InfiniBand enablement kit? I was under the impression that the 10Gig worked without anything additional, but all the spec sheets for the G7 nodes specify an "optional" upgrade kit if you want to use IB. And I'm assuming the node sleds had at least one of the required trays so you can use 2.5" or 3.5" drives?

HP InfiniBand IB Enablement Board 620760-001 614841-B21

Is that the piece that's used for the IB, or are they incorrectly using a picture of the iLO key? I never got good clarification on this.
 

H8ROADS

New Member
Oct 27, 2015
Denver, CO
Correct; we reached out to the seller, Peter @ theserverstore in TX, and verified before we purchased. Regarding the sleds, it had none, but after a call to Peter he sent me a bunch of them (along with another PSU to replace a faulty DOA one) for a few extra bucks. It has all the trays, just not the sleds.

I'll ask my friend to weigh in on the IB dongle; he's already hooked his up and messed with it quite a bit, including finding the right drivers. I'll ping him the URL and have him weigh in, but IIRC the enablement board is already installed in our particular chassis.
 

Dajinn

Active Member
Jun 2, 2015
That's cool, thanks for the info. Yeah, I was heavily considering a loaded S6500 chassis from theserverstore for a while and just decided to go with R610s. Sometimes I regret not getting the S6500, or at least starting off with a C6100. You could easily build a really cool failover storage cluster using the HBAs they included, plus some DAS units. What are some of your plans for the chassis? Do you have any plans to wire it up to a 20-30 amp outlet?
 

DaSaint

Active Member
Oct 3, 2015
Colorado
Weighing in as H8Roads mentioned... We both got these chassis; mine cost less because I had all the gear from other chassis (a C6100 and other whiteboxes using the same-generation tech) and just wanted to consolidate it... all I needed was the chassis.

The onboard 10Gb should work without the dongle; the dongle is for the InfiniBand side (QSFP cabling). The SFP side for the 10Gb should work without it...

As H8Roads mentioned, there are some catches with ESXi: it doesn't play nice with the OFED 2.3.3 driver. Although it does identify both ports, it doesn't play nice in the logs, so it's recommended to use one or the other... Ethernet (SFP) or InfiniBand (QSFP), but not both at the same time.

FYI as well, the InfiniBand side (QSFP) is not Ethernet capable from what I found out... it does InfiniBand only. I use OFED 1.8.2.4 with the firmware we got from HP, I think it's 2.9.1530; the initially installed firmware was 2.8, which would have had a bad time... These are considered ConnectX-2 based cards (MT26438), so luckily for me that means Solaris (OmniOS) supports them natively, IIRC!
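
If you want to double-check the firmware and which link layer each port is actually running before buying enablement bits, one rough sanity check (assuming you can boot the node into some Linux with the mlx4/IB driver stack loaded; this is just the generic kernel sysfs tree, nothing HP- or SL390s-specific) is something like:

Code:
# Dump firmware and port info for any Mellanox/ConnectX adapters the kernel
# sees, read straight from /sys/class/infiniband. Purely read-only.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for dev in sorted(glob.glob("/sys/class/infiniband/*")):
    name = os.path.basename(dev)
    print(f"{name}: firmware {read(os.path.join(dev, 'fw_ver'))}, "
          f"board {read(os.path.join(dev, 'board_id'))}")
    for port in sorted(glob.glob(os.path.join(dev, "ports", "*"))):
        print(f"  port {os.path.basename(port)}: "
              f"state={read(os.path.join(port, 'state'))}, "
              f"rate={read(os.path.join(port, 'rate'))}, "
              f"link_layer={read(os.path.join(port, 'link_layer'))}")

The fw_ver line is where you'd see the 2.8 vs 2.9.1530 difference I mentioned.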

The IB switches we both got are from the same generation, but we also got a good hookup on them; the OpenSM side will work great for the IB infrastructure... On the Ethernet side I think you can use pretty much any SFP-based switch...

Currently running ESXi 6.0U1 with OFED 1.8.2.4, and all is stable right now at 40Gb, running back-end vMotion, vSAN, NFS, etc. for main storage...

I'll have pics later :)
 

Dajinn

Active Member
Jun 2, 2015
Does it do IP over IB? If not, that'd probably be a deal breaker for me.
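
To be clear about what I mean: if IPoIB comes up, you get an ib interface you can put a normal IP address on, and ordinary TCP traffic (iSCSI, NFS, or even a dumb throughput check like the sketch below) just runs over it. This script is only an illustration with placeholder addresses and port, and single-threaded Python won't get anywhere near 40Gb line rate; it just proves IP traffic flows over the IB link.

Code:
# Crude two-node TCP check over an IPoIB (or any) interface address.
# Run "--serve <local ib0 address>" on one node, then
# "--push <that same address>" on the other. Addresses/port are placeholders.
import argparse
import socket
import time

PORT = 5001        # arbitrary test port
CHUNK = 1 << 20    # 1 MiB per send/recv

def serve(bind_addr):
    # Listen on the IPoIB address and count incoming bytes until the peer disconnects.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, PORT))
    srv.listen(1)
    conn, peer = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    secs = max(time.time() - start, 1e-6)
    print(f"{total / 1e9:.2f} GB from {peer[0]} at {total * 8 / secs / 1e9:.2f} Gbit/s")

def push(target_addr, seconds=10):
    # Blast zero-filled buffers at the listener for a fixed time window.
    sock = socket.create_connection((target_addr, PORT))
    payload = bytes(CHUNK)
    deadline = time.time() + seconds
    while time.time() < deadline:
        sock.sendall(payload)
    sock.close()

if __name__ == "__main__":
    p = argparse.ArgumentParser(description="crude TCP check over an IPoIB address")
    p.add_argument("--serve", metavar="ADDR", help="local ib0 address to listen on")
    p.add_argument("--push", metavar="ADDR", help="remote node's ib0 address")
    args = p.parse_args()
    if args.serve:
        serve(args.serve)
    elif args.push:
        push(args.push)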
 

DaSaint

Active Member
Oct 3, 2015
Colorado
Anyone wanting more info on this chassis can take a gander at my blog post; I'll get more detailed down the road with the individual blade types that I have (SL390s and SL250s).

Updating the HomeLab – The HP S6500 – VDI Tech Guy

@Dajinn - Wanted to update you: I reverted to the Ethernet side of the SL390s blades due to PSODs caused by the OFED driver... Mellanox hasn't really updated that code past 5.5, and as I get deeper into the 6.x code it seems to become more problematic... The onboard 10Gb with the native drivers (currently using 3.0 with the HP disk) seems to be stable so far... but time is usually the culprit for this... I was getting PSODs weekly, and they were traced back to the OFED driver.