I have an HP DL360 G6 with the onboard P410i controller and have had performance problems with exactly the drives you're thinking about using - HGST Travelstar 7K1000. The server runs fine using the 300GB SAS drives, but with SATA drives (the benchmarks below apply to the WD 750GB Scorpio Black, WD750BPKT) I get low transfer rates compared to the P410 add-in card.
Was the array built before or after the firmware flash?
I'm doing some testing/rebuilding of the storage in a G7 with a P410i with 1GB cache.
It was originally running RAID 6 using a trial license key.
The server was sent to the colo and the trial key forgotten about until it returned to the mothership.
I read - in this thread, I think - about the additional RAID levels being unlocked with a firmware flash, so it seemed like a no-brainer to go ahead and flash it.
I did, then removed the SAS drives and put in five Samsung Pro 1TB drives in RAID 5.
The format started uneventfully; I exited the ACU and let it continue to build.
This is a vSphere host - 5.0 at the time.
I went and did some other things, came back, ran a quick benchmark, and saw results around the throughput limit for that card when using parity.
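For a quick-and-dirty sequential check from the ESXi shell, something like the below works - just a sketch, the datastore name is a placeholder, and busybox dd has no O_DIRECT, so the numbers run optimistic:

    # rough sequential write test; datastore1 is a placeholder path
    time dd if=/dev/zero of=/vmfs/volumes/datastore1/ddtest bs=1048576 count=4096
    # clean up the test file afterwards
    rm /vmfs/volumes/datastore1/ddtest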
I wanted to double-check the cache setting, and probably tweak it a bit based on my expected workloads, so back to the ACU.
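If you'd rather not reboot into the ACU for that, hpacucli exposes the same settings - a sketch, assuming the controller is in slot 0 (verify with 'hpacucli ctrl all show' first):

    # show controller detail, including the current read/write cache ratio
    hpacucli ctrl slot=0 show
    # example tweak - ratio is read%/write%, pick values for your workload
    hpacucli ctrl slot=0 modify cacheratio=25/75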
Everything looked fine until I drilled down into the array, where it indicated that parity had failed to build.
I'm not a storage guy, so I had to think about that for a moment.
Yeah - it tracked with what I was seeing - RAID 5 before parity is built is more or less RAID 0, and since parity was never built in the first place, the array would not go into degraded mode, since there was nothing to rebuild - sort of.
At this point I had already flashed to the current firmware, but I was still not seeing the RAID 6 option.
I wasn't planning on running RAID 6 anyway due to its poor write performance, but it was sort of a miner's canary that told me something was up.
I deleted the trial key that had expired three years previously and rebooted - no joy.
Shrugged, moved the one VM I had migrated to it back off, deleted the array, and rebuilt it.
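For what it's worth, the delete/recreate can also be done from hpacucli instead of booting into the ACU - a sketch, assuming slot 0 and logical drive 1; this destroys the data, so verify targets with 'hpacucli ctrl slot=0 pd all show' first:

    # remove the existing logical drive (and everything on it)
    hpacucli ctrl slot=0 ld 1 delete forced
    # recreate RAID 5 across all unassigned physical drives
    hpacucli ctrl slot=0 create type=ld drives=allunassigned raid=5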
Oh, and just to confuse the issue - possibly because it was 2am - I also updated from generic ESXi 5.0 to the 5.5 HP OEM image, to make sure I hadn't missed any drivers or created a broken driver situation that I wouldn't otherwise see.
NOW it showed that it was built properly.
Also, I now started to see the storage controller in the ESXi GUI, where previously I had not.
It may have been exposed at the command-line level, but I wasn't at the point of looking there, since a rebuild is quick and my ESXi CLI skills are minimal.
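For anyone who is at the point of looking there, this is roughly what I'd have checked from the ESXi shell - the P410i should show up bound to the hpsa driver:

    # list storage adapters and the driver each is bound to
    esxcli storage core adapter list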
This is a long way around to suggesting you try a simple rebuild if the array was initially configured on the previous firmware, and also that you double-check the drivers in use - the controller firmware does interact with the OS driver, so an old/incorrect/broken driver can affect the way an array is built or accessed.
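A minimal way to confirm the driver side from the ESXi shell (5.x syntax):

    # confirm which hpsa VIB/driver version is installed
    esxcli software vib list | grep -i hpsa
    # show the loaded module's version and parameters
    vmkload_mod -s hpsa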
And check the array in ACU.
I did also look at it via various other utilities and none of them indicated an error.
For that matter, it was odd that ACU wasn't screaming on the main page - failure to initialize parity is a pretty serious condition - so do make sure you drill down into the array and verify it doesn't show something off there.
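If the HP CLI is installed, the parity state is visible there too - a sketch, assuming slot 0; look for the Parity Initialization Status line on the RAID 5/6 logical drive:

    # drill into the logical drive(s) - a healthy RAID 5 should show parity
    # initialization completed (or in progress on a fresh build)
    hpacucli ctrl slot=0 ld all show detail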
I did see specific errors in the Insight Diagnostics that run from the 8.whatever image used for new server setup, indicating hot removal of multiple drives.
I did move a drive right after power-up, but well before POST, and that was before I had booted into the ACU to create the array in the first place - also, I know I only moved one, so I think that's unrelated.