HPE Smart Array idles at 65C


Xaekai

New Member
May 23, 2020
Is this normal, or is this HBA not operating as it should? The machine is completely idle right now.



I suspect this isn't normal. But this is my first experience with HPE anything, and also my first 1U, so I have no basis for comparison.
 

Xaekai

New Member
May 23, 2020
Alright. It just seems excessively high when the CPU runs 25C cooler. If it were 50C or even 55C I wouldn't be quite as concerned. These Gen10s were advertised as being power efficient and running cooler. (The unit is a P408i-a SR Gen10.)
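For anyone who wants to compare numbers: something like the following should dump the controller's temperature sensors from the OS (this assumes HPE's ssacli utility is installed and that the embedded controller shows up as slot 0; adjust to whatever the first command lists):

# list controllers and their slot numbers
ssacli ctrl all show

# the detail output includes the controller temperature sensors
ssacli ctrl slot=0 show detail | grep -i temperature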

I'm also experiencing an issue with the RAID10 on this controller: relatively poor write performance. The combination of the two is what made me suspect the part may be faulty. Also, when I was secure-wiping these drives with sg_format so I could alter the logical sector size to 4K, the controller got up to around 85C.
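For the record, the reformat was done with sg_format from sg3_utils, roughly along these lines (the drive letter is a placeholder, and the format step destroys everything on the drive):

# check the current logical/physical block sizes first
sg_readcap --long /dev/sdX

# low-level format to 4096-byte logical blocks (wipes the drive)
sg_format --format --size=4096 /dev/sdX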

I have a quartet of Samsung PM1633s, whose specs state sequential writes of up to 930 MB/s.

The set in RAID10 reaches 360 MB/s writes. That's barely better than what the striped pair of WD Blacks I used to have in my workstation could do.

[attached screenshot: 1591991486908.png]

While I certainly didn't expect to hit 1800MB/s writes, this is so far below my expectations I can't help but question the controller.
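If it helps to rule out the filesystem, I can rerun this as a raw sequential write against the md device with fio, something like the command below (/dev/md0 stands in for whatever the array device actually is, the parameters are just a reasonable starting point rather than what produced the screenshot above, and writing to the raw device is destructive):

# 1M sequential writes straight to the array, bypassing the page cache
fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
    --ioengine=libaio --iodepth=32 --direct=1 --size=16G --group_reporting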
 

Xaekai

New Member
May 23, 2020
Even if it were some non-optimal figure, the gap between what it is and what it should be is extreme. But I made sure every aspect of this was at its minimum optimal value or a power-of-two multiple of it.

I reformatted the drives to 4K logical blocks (they have 8K physical blocks, but there is no way to change the logical block size to 8K).
They each consist of a single partition that starts at sector 2048 (which is precisely 8MiB into the disk) and consumes the entire remainder of the disk.
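The arithmetic works out (2048 sectors x 4096 bytes = 8MiB), and the block sizes and alignment are easy to double-check from the OS (sdX and partition number 1 are placeholders):

# what the kernel sees after the reformat: should be 4096 / 8192
cat /sys/block/sdX/queue/logical_block_size
cat /sys/block/sdX/queue/physical_block_size

# confirm the partition start is optimally aligned
parted /dev/sdX align-check optimal 1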

These four partitions were used as the block devices given to mdadm to create a near-2 RAID10. Chunk size is 256K, because I was led to believe that internally the smallest chunk the drive can actually write is 128K anyway, so this seemed like the ideal target to minimize needless write amplification.
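For completeness, the array was built with something along these lines (device names are placeholders; near-2 is also mdadm's default RAID10 layout, but it doesn't hurt to be explicit):

# four-device near-2 RAID10 with a 256K chunk
mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=256 \
      --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# verify the layout and chunk size took effect
mdadm --detail /dev/md0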

As I only have about 8GB of data on this array currently, my options to nuke and pave and redo whatever are completely open. I wanted to figure out what the problem is before I load this hypervisor up with guests.