HP RAID Controllers on the Intel 2600GZ Platform (P212 and P420)


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Just as a note to folks who were wondering, dba and I spent a good 2+ hours today trying to get the HP P420 and HP P212 controllers working on an Intel 2600GZ platform. Tried just about everything we could think of to no avail.
 

mobilenvidia

Moderator
Sep 25, 2011
1,956
212
63
New Zealand
Does an LSI card work, or has Intel crippled their latest E5 and E3 boards?
Someone with a 1200 'R' board found an LSI-branded card doesn't work, while on the exact same non-'R' board it does.

Can you get the RMS25KB080 to work on it, then flash it with LSI firmware and try it on another motherboard?
 

TheBay

New Member
Feb 25, 2013
220
1
0
UK
That's bizarre! Could you see the option ROM at all?
The P212/P410/P420 etc. use PMC-Sierra chipsets; I've never had a problem with them on anything, apart from having to flash firmware in an HP server.

I wonder if the BIOS has some form of whitelist/blacklist, or if it pulls a PCIe pin low/high on a non-Intel card.
 

Patrick

Administrator
Staff member
We pulled out the Intel-LSI SAS2208 mezzanine card and the dual 10GbE mezzanine card so that the only expansion card was the HP P420 or P212, and still had issues. We saw the cards' firmware banner, but then got minutes of black screen. Removing the cards allowed the system to boot.

It may be the Intel server itself. dba took home another brand of LGA1356 platform to see if it works there. One thing we did confirm really quickly is that the Dell C6100 heatsinks work without issue on the dual LGA1356 platform we tried.
 

mrkrad

Well-Known Member
Oct 13, 2012
1,244
52
48
Oh, I did test this on an HP Elite 8300, which is a very low-end desktop.

1. Ensure active cooling - this card executes a thermal shutdown after about 10 minutes in open air (zero airflow). Even though it runs cooler than the LSI cards, it does not tolerate cooking itself.
2. Force PCIe 2.0?
3. If Intel storage controllers and Dell controllers have issues (the old trick of blocking pins 5 and 6 of the PCIe card's SMBus - Google it, usually for a PERC 5/i or 6/i), perhaps the HP is similar. Odd that it works with a crappy Core i3 box.
4. I assume you are using a semi-recent controller. We tested this with 5600-series CPUs; I don't use anything older in production, so I can't tell.
5. I usually separate the RAID controller onto IRQ 5, keep the NICs around 7, and put the USB/iDRAC/iLO junk on higher IRQs like 10/11 - call me old school, but I still believe it matters.
6. HP cards absolutely hate having their cache board/supercap removed; however, I've seen them get stuck, in which case you can boot without it. BE MINDFUL that the cache board and controller are live when the supercap or battery is connected - laying the card down on a conductive surface is a guaranteed way to kill it. I've got a few that have been repaired for exactly this reason (50 bucks, why not?) - for experimenting with better heatsinking.

7. Did I mention it will last 10 minutes tops in an open-air case without 250-300 LFM of airflow before executing a controller shutdown? It will log an IML event in IPMI/iLO as well. Even in Gen8 servers, this card must be placed in slots that have exceptional cooling. i.e., in my Z400 I have to throw a heat pump on it or use an adjacent cooling slot - a.k.a. a big old white fan blowing on the card. The average idle temp is 58C in a G7; the LSI 9260 and 9266 idle at 68C in the same slot.

By the way, I am using Samsung 840 Pros, which report the SSD's temperature to the controller. I haven't tried any other drives.

8. P410 controllers must always be connected to something. Leaving the controller ports dangling has been problematic, mostly with the on-board ones. Probably errata.

The blue boards are early prototypes, and the one I've got had rig wires on it due to damage (see above about laying the card down while powered by the supercap).

I'll be glad to test any board you have, but I'd suggest: remove the supercap, remove the cache board, and insert the card into an x8 slot (PCIe 2.0? Force 2.0 if possible).

When Smart Array is pissed, it tends to spin BBS-style \|/\-/ many times. If there is a logical volume to power up, each one will usually get one spinner quickly; if you see a bunch of spinners with no logical volumes, it's pissed (hung, bad cache board, firmware bug, slot issue).

If you got these where I think you did, PM me and I can help you get an advance-swap RMA, but I would be putting my name on your returning the defective card promptly. Or you can send it to me and I'd be glad to test it and RMA it. 100% legit - like others, some of us are on the line with HP for doing the right thing.
 

TheBay

New Member
Also, the P212/P410 will hang with some Samsung drives - I know for a fact they don't work with F2s.
 

mrkrad

Well-Known Member
Not at all, but x24 PCIe 3.0 slots? That's interesting. (Googling the spec.)

Is this a special-purpose board?

Seriously, I popped P420s into cheap HP desktops first: 1 PCIe x16 and 3x PCIe x1 (all 2.0).
 

mrkrad

Well-Known Member
Sort of. If you don't care about booting off the device or using [F8] to configure, you can always throw it into anything, but some people want to use [F8] to configure or to see status problems [a bad SAS cable causes a controller hang]. You should always use ACU to create your RAID so you can create multiple LUNs from sets of disks, like Intel ICH can (10 drives: 1 RAID-10, 1 RAID-10, 1 RAID-5, 1 RAID-6, 1 RAID-0). If you create the RAID using [F8] ORCA, you may not be able to modify it later.
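
As a rough sketch of the multiple-LUNs-from-one-drive-set point above, here is what that looks like with HP's Array Configuration Utility CLI (hpacucli). The slot number and the 1I:1:x drive bay addresses are hypothetical, and exact syntax varies a bit by controller generation, so treat this as illustrative rather than copy-paste:

```
# Show every Smart Array controller and its current configuration
hpacucli ctrl all show config

# Create a RAID-10 logical drive from four drives (hypothetical bays)
hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4 raid=1+0

# Carve a second logical drive, RAID-5, from three more drives on the same controller
hpacucli ctrl slot=0 create type=ld drives=1I:1:5,1I:1:6,1I:1:7 raid=5
```

ORCA, by contrast, only offers the basic one-array-at-a-time flow, which is why modifying its layouts afterward can be a dead end.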

HP, like IBM and Dell, likes to check whether you are using their system (probably an SLP-style check - easy!), but HP uses heavy encryption to keep you from extracting their code, unlike IBM and Dell, whose tools unpack into %temp%, run, then exit.

I usually keep a Dell, an HP, and an IBM/Lenovo around to play with. It just makes things easier. Most workstations are designed to handle the extended BIOS needs of multiple cards (4 dual-port NICs with PXE/iSCSI boot, 2 RAID boot ROMs, etc.). Desktop boards tend to focus on fast boot-up and have very large BIOSes for overclocking/tuning, plus UEFI - which is a nightmare in itself. Good luck moving a USB-boot ESXi stick off the same IRQ as the RAID/NIC with UEFI. If you don't believe it impacts performance, log interrupts and watch the mess a USB mouse (especially a super-high-resolution gaming mouse) can make for a hypervisor. I hope this is fixed someday, but then again I hope they make it so one PCIe card can't crash both CPUs and the entire OS at once on a dual-socket/dual-IOH/dual-bus system - but that still isn't the case. lol.

The ProLiant MicroServer is pretty cool - it can run a P400/P410/P420 but has special cooling needs due to its poorly designed ventilation. ML110s are very cheap too. The ML110 G7 is very cool: it can come with a redundant power supply, the standard model comes with hot-swap drive bays (same as the MicroServer), and it makes a decent desktop with PCIe x16 for video and ECC memory support for reliability. I love using these as my desktop.
 

TheBay

New Member
I had a P212 running in one of my MicroServers for 2 years. It worked really well.
The MicroServers are also handy for flashing HP cards, etc.
 

mrkrad

Well-Known Member
Yeah, I ran a P420 with an X25-M, using a SAAP 2.0 key to cache 7 15K SAS drives in RAID-5. Wow, it works really well. I was expecting the system to complain, but it just took the drive, set up the 160GB logical cache, and went to town - 90%+ cache hit rate (read caching only). Very snappy, with the rest handled by the flash-backed write cache and the 15K SAS drives behind it.

I'm going to look at a larger drive, or two drives in RAID-1 with a piece carved off for boot.

To be honest, the LSI controllers are very unimpressive, with the world's worst interface on the planet compared to ORCA, ACU, etc.
 

TheBay

New Member
Do you still have a MicroServer? Try my BIOS :)

I've never used LSI controllers in RAID mode, only as HBAs, so I have no basis to comment, though before I flashed them I played with the IR option ROM and didn't like the interface. Let alone their software.

ML110s make a great workstation; shame they are not as cheap new as they used to be :(
 

mrkrad

Well-Known Member
You the dude that made the OA 2.2 BIOS? ;)

By the way, what's the word on 16GB ECC UDIMMs? I'd love to try the NL54 at 2.2GHz - the older one was just so damn slow that without 16GB ECC, I'm afraid I'd rather go for the ML110 G7, which isn't much more money and has the same hot-swap bays, decent Intel Xeon CPUs (E3-1270), and 32GB of ECC UDIMMs using the same chips the MicroServer uses to reach 16GB ECC, iirc. Same direct fan-out hot-swap with RAID-controller bays. x16, x4, x4, and x1 slots (x8 mechanical on the x4s), but only PCIe 2.0 - so you will have to eat the x16 for the RAID controller and perhaps run multiple gigabit NICs on an x4, or a single 10GBASE-T on each x4 - or two RAID controllers, one on each x4, with 2 drives each?

iLO 3 with a shared or separate port is also nicer than the MicroServer's. Gotta love full iLO without an add-on card. The ML110 G7 also comes with an option for 8 SFF bays! It doesn't use the expensive SmartCarrier b/s cages that cost $40-60 per 2.5" drive; you can rock the cheapie KIRF Chinese hot-swap cages just fine!

There's a redundant hot-swap power supply option with 2 gold common-slot 460W units, for those who can appreciate two power paths.

Dual crappy NICs (NC112) is a nice uplift too, since one NIC just doesn't cut it these days, especially with iLO sharing eth0.

Also, ECC support with the Celeron G530 (!!), Pentium G630 or G840, Core i3-2120, and the Xeon E3! Apparently Celerons are just slower Pentiums, which are just slower i3s, which are just slower Xeon E3s, lol. Maybe less cache, fewer MHz, and less efficient.

I might have to replace my fire-hazard ML110 G5 with one of these puppies! Pop in a P420 RAID card, an x4 video card, an x4 10GBASE-T NIC, and a PCIe x1 card for USB 3 ports.
 

TheBay

New Member
you the dude that made the OA 2.2 bios? ;) ...
Not sure what the OA 2.2 BIOS is, so I guess that wasn't me lol (is this SLP/SLIC?). There is a huge price difference in the UK between the MicroServer and the ML110 - the MicroServer is £125 after cashback!

Yeah, the SM boards work with ECC on the lowly Celerons :) even though Intel don't document it.
16GB in a single UDIMM?
 

mrkrad

Well-Known Member
8GB UDIMMs. But honestly, I'd go for an older G6/G7 HP or Dell PowerEdge (Nehalem/Westmere). Until Haswell, which will double Nehalem/Westmere speed, it is a waste to spend your money elsewhere.

64GB for SQL Server means no read-ahead; just tempdb and log-file writes need to be fast. I can get 8GB ECC RDIMMs at $55 each, new, made in the USA, so call me old school, but a quad-core E5620 crapbox full of 8 SSDs is far more valuable than the intermediate Ivy/Sandy Bridge that costs more and doesn't yield the performance gain for the money. DL380 Gen8? Pssht. Waste.

The NL can have OA 2.2, which is SLIC for Server 2012; you can buy it that way or flash it. Of course that is for convenience (not having to deal with activation), but you should always properly license your machines.
 

dba

Moderator
Feb 20, 2012
1,477
184
63
San Francisco Bay Area, California, USA
I used that same machine from Patrick for the LSI 9202-16e and 9207-8e reviews, so the Intel isn't trying to lock out non-Intel cards.

The HP P420/1GB card also failed to work in a Dell C6145, but it worked just fine in an HP DL160 G6, so I know it's not broken.

By the way, mrkrad was correct: the HP P420 runs extremely hot. The DL1260 has two fans aimed at the PCIe area of the motherboard, and still the heatsink on the RAID card was too hot to touch for more than an instant. The PMC-Sierra-based card was fast - 3,349 MB/s RAID0 performance with eight 128GB Samsung 840 Pro SSDs, saturating the PCIe 2.0 bus. I wish the Intel E5 machine had worked so that we could get PCIe 3.0 benchmarks out of the card.

Does a LSI card work, or has Intel crippled their latest E5 and E3 boards. ...
 

dba

Moderator
Let's see some benchmarks DBA :)

For the price are you happy?
Given its speed and the price I paid, the HP P420 is an absolutely insane deal for any owner of an HP G8 server. For HP G7 owners, it's very fast and very reasonably priced, but it needs better cooling. For non-HP, I've had nothing but heartache so far.

That said, I haven't given up yet. I don't have an HP G8 machine, but I do have a loaner Asus Xeon E5 box from Patrick. If I can get the P420 to work on some non-HP machine, then I plan to write up a main-page review. If it ends up being HP-only, then I'll post the full results to the forums. It's a lower priority, so it could be a month or more. Thanks for the heads-up earlier - I will definitely try out the advanced features, especially SSD caching of arrays.

Partial results:
Server is a DL160 G7 with a single X5570 CPU and 2GB RAM, running Windows Server 2012.
RAID card is an HP P420/1GB FBWC.
Drives are 8 Samsung 840 Pro 128GB.
Volume is RAID0 with a 256KB stripe size. All cache allocated to reads for the read tests, 80% write/20% read for the read/write tests:

1MB random reads, QD=32, 1GB data size (fits entirely in cache): 3,349 MB/s
1MB random reads, QD=32, 6GB data size (too big to fit into the cache): 3,274 MB/s
Basically, the card is saturating the PCIe2 bus. It would take a PCIe3 server to really see what it's capable of.
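
To put that saturation claim in numbers (my back-of-the-envelope math, not part of dba's results): PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so an x8 slot has roughly 4,000 MB/s of usable bandwidth before protocol overhead, and 3,349 MB/s is about 84% of that ceiling:

```python
# PCIe 2.0 back-of-the-envelope: 5 GT/s per lane, 8b/10b line coding
GT_PER_LANE = 5e9                      # raw transfers per second per lane
usable_bits = GT_PER_LANE * 8 / 10     # 8b/10b: 10 bits on the wire per 8 data bits
per_lane_mb = usable_bits / 8 / 1e6    # -> 500 MB/s usable per lane
slot_mb = 8 * per_lane_mb              # x8 slot -> 4000 MB/s

measured = 3349                        # MB/s, from the RAID0 read test above
print(f"x8 PCIe 2.0 ceiling ~{slot_mb:.0f} MB/s; "
      f"measured {measured} MB/s = {measured/slot_mb:.0%} of the bus")
```

With DMA/TLP overhead eating a chunk of the theoretical 4,000 MB/s, ~84% is about as close to the wall as a real controller gets.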

4KB random reads, QD=32, 1GB data size: 119,709 IOPS
IOPS is far lower than the best LSI FastPath results, but still pretty darn high.
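
For scale (again my arithmetic, not part of the posted results): 119,709 IOPS at 4KB works out to only about 490 MB/s of throughput, so the small-block test is bounded by the controller's IOPS ceiling, not by the bus:

```python
# Convert the 4KB random-read result into bandwidth terms
iops = 119_709
block = 4 * 1024                        # 4KB block size, in bytes
throughput_mb = iops * block / 1e6      # decimal MB/s
print(f"{iops:,} IOPS x 4KB = {throughput_mb:.0f} MB/s")
```

That is roughly an eighth of what the same card pushes with 1MB reads, which is the usual shape: sequential large-block work saturates the link while small-block random work exposes the firmware's per-command overhead.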
 