This is the max ambient temperature. The ASIC/CPU can get up to 80-90°C before sensors and alarms go off.
65°C is cool; my PERC H330 has been running at 75-80°C for years. That temp is perfectly normal for SAS cards in constant operation. If you turn off your PC daily, then it's a different story, and it would be better to cut 10 or more degrees off the max.
Thanks for this. Between these two posts, this was the answer I came here looking for.
I had seen the 55°C figure bandied about, and thought it sounded awfully low for a silicon chip, but I didn't know for sure. Makes perfect sense that it is ambient.
I currently have two active systems with LSI/Avago/Broadcom/Whatever HBAs (both currently happen to be SAS9305-24i cards), though I also have a currently non-booting remote backup server box with two SAS9300-8i's (due to an errant "zpool upgrade") and a "spare parts" box full of various others I no longer use, including several SAS9300-8i's, a SAS9300-16i, a SAS9400-16i, and several old SAS9200 series cards.
I had always been somewhat concerned by just how hot these things get to the touch, even after over a decade of using them (I started out with a couple of IBM M1015 cards cross-flashed to HBAs back in 2014). I only just heard of the "storcli" program last week.
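For anyone else who also hadn't heard of it, the basic temperature check I use looks something like the line below; the controller index and exact subcommand syntax can differ by storcli version and card, so treat it as a starting point rather than gospel.

storcli64 /c0 show temperature

The same reading should also be buried in the output of "storcli64 /c0 show all" if the dedicated subcommand doesn't work on your card.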
The primary box is my all-in-wonder VM/Container host and NAS server utilizing a Supermicro H12SSL-NT board with an EPYC 7543. It is in a custom-modded Supermicro SC-846 case with the direct BP846A backplane (no SAS expander). I took out the fan wall, replaced it with three 3000rpm 120mm Noctua industrial fans, and custom-crafted a piece of wood to block air from bypassing over the top of them.
The 3000rpm Noctua fans provide lots of airflow at ~110 CFM each at max speed. I have no way of measuring whether the setup actually meets the 200 LFM (linear feet per minute) airflow spec, but dividing the total airflow by the swept area of the fans gives me about 710 linear feet per minute (rough math below). That doesn't account for the fact that 110 CFM is an unrestricted specification, and the real flow is decidedly less once static pressure is factored in. I don't have the proper metrology gear to measure what kind of airflow I am actually seeing, but I figure that is plenty of safety margin.
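For anyone who wants to check my rough math (I'm treating each 120mm fan's face as a plain square, which slightly overstates the true swept area):

3 fans x 110 CFM = 330 CFM total
3 x (120 mm)^2 = 3 x (0.394 ft)^2 ≈ 0.465 sq ft
330 CFM / 0.465 sq ft ≈ 710 LFM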
Either way, this results in much quieter operation, which is important for a home setup, as the little high-RPM 80mm screamers are audible two rooms over with all doors closed. Despite pushing more air, the 120mm fans are quieter due to their size. All else being equal (same airflow), a larger fan is generally quieter. (Also, more fans at lower speeds are generally quieter than fewer fans at higher speeds.)
In this configuration, the SAS9305-24i with 12x 16TB 7200rpm Seagate Exos x18 drives attached sits at ~80°C at idle in a 24°C room, which is slightly alarming to me, but the temperature warning LED has never gone off, so I guess I am OK.
I think the 6 semi-bulky SAS cables coming off of the 24i variant get in the way of direct airflow. I might mount another fan blowing at the HBA if my concern grows.
The second system uses my (very old but surprisingly still usable) decommed server board (Supermicro X9DRI-F with dual Xeon E5-2697 v2's) in a pedestal workstation case (OG Phanteks Enthoo Pro) with a 200mm fan up front blowing air across the hard drives and into the case.
In this case, the Medusa's hair of SAS cables blocks airflow to the HBA even more, and there is less airflow overall, so I added a fan blowing down on the HBA (and the NIC; both are sandwiched between the old Titan GPU and the 16x four-way NVMe riser card) to help remediate it. It doesn't blow in the same direction the heatsink fins run, but at least it helps a little bit.
In this case and configuration, the same model of HBA runs significantly cooler, only registering about 60°C, but the office it is in is a degree or two cooler, and the load on the workstation is lower, with fewer drives attached (only 6x 4TB HGST SAS drives; the rest of the SAS connectors go to various 3.5" and 2.5" hot-swap bays for imaging and other maintenance purposes).
Interestingly, I have never seen more than a 1-2°C difference in SAS HBA temperature (as reported by storcli) between idle and full load on any system. It seems to me like LSI/Avago/Broadcom/Whatever have little to no power management in their designs. I'm guessing they just run at full power all the time, whether at full load or idle, which is kind of wasteful :/
My guess would be that the tiny difference in temperature I see between idle and load is a result not of the HBA silicon itself, but rather of the heat coming off other devices that also see more load when the drive load goes up (the hard drives themselves, CPUs calculating ZFS checksums, etc.).
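If anyone wants to repeat that idle-vs-load comparison, the crude way I do it is just to poll the controller once a minute while a scrub or big copy is running; again, the controller index and exact storcli syntax may vary on your setup, so this is just a sketch:

while true; do date; storcli64 /c0 show temperature; sleep 60; done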
The hottest LSI HBA I have ever used, despite its massive heatsink, has been the SAS9300-16i (judging by touch; this was before I discovered "storcli", and I haven't had the card installed to test it since then). I presume this is because that board was simply two SAS9300-8i controller chips glued together on the same card behind a PCIe (PLX) switch, and the PCIe switch uses a not-insignificant amount of power itself, in addition to there being two SAS controller chips instead of just one. Notably, this is also the only HBA I have ever used that required its own supplemental PCIe power cable; apparently the power the slot could provide was insufficient. The temperature of that heatsink was definitely above the pain threshold when I touched it. I'm actually surprised I didn't get a blister.
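If I ever drop that card back into a box, the easy way to sanity-check the two-chips-behind-a-switch theory would be to look at the PCIe topology and see whether two SAS controller functions sit under a PLX/PEX bridge; that's my expectation, not something I've actually verified on this particular card:

lspci -tv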
To address this, I zip-tied a 92mm slim Noctua fan directly to the heatsink.
This worked very well to control the temperature of the thing, but it did block the abutting PCIe slot, which is why I eventually replaced it, as I needed that slot. Starting with the SAS9305-16i, they moved to a newer chip with all 16 ports on a single controller, which ran much cooler, so that is what I went shopping for, but when I did, I found a decommed SAS9400-16i for almost the same price, so I just bought that instead.
It worked well, and I used it in this workstation up until just a few months ago, when I wanted to add the six hard drives to create a ZFS send/recv backup pool in the workstation (to do backups off of my main NAS, until I could figure out and fix why my offsite backup server wasn't booting, and this was going to take a while as it is in a location I rarely have time to visit).
When adding the 6 hard drives, I ran into the issue that while I only needed 15 connections, 6 of them needed to be SAS (for the hard drives) and 9 of them needed to be SATA (for the hot-swap bays). I initially considered pulling one of the old Intel or HP SAS expander cards I have in the spare parts bin for this, but rather than deal with a hodgepodge of adapters or expanders (I hate troubleshooting intermittent adapter connections), I noticed that the 9305-24i's, which were rather pricey when I bought the one for my NAS, are now quite affordable on the used server pull market, so I just got another one. This way I can have two breakout cables that are SAS (no backplane here, unlike the Supermicro case) and three breakout cables that are SATA, and keep things cleaner.
I think the 9305-24i might just run a little cooler than the 9400-16i did. This might have something to do with the fact that the 9400 series was the first to support NVMe drives using what they call Tri-Mode SerDes technology. Some part of that SerDes circuitry must consume power and create heat even when not in use.
At first I thought it had some sort of integral PCIe (PLX) switch that handled this, but I have since learned that the Tri-Mode SerDes HBAs actually present NVMe drives to the host as SCSI devices, not as native PCIe devices, so there is probably something else going on here.
I haven't tested this myself yet, though; I am just going off of what I have read.
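If I do get around to testing it, my plan is simply to check how a drive behind the card enumerates on the host. My expectation (untested on my end) is that an NVMe drive behind a Tri-Mode HBA would show up as an sdX SCSI device with a SAS/SCSI transport rather than as a native nvmeXnY device, which should be visible with standard tools:

lsblk -o NAME,TRAN,MODEL
lsscsi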
I have also read that it's not worth bothering with the NVMe capability of the SAS9400 and SAS9500 series HBAs, as they really neuter the performance of attached NVMe drives. The SAS9600 series, however, apparently works at near-native NVMe speeds (or so I am told), but those are way too new to be in my hobbyist's budget. Maybe I'll get one down the road when server pulls start showing up.
The 9400 and 9500 might be OK if you just need to access NVMe drives, though, and don't need their full native PCIe performance.
Anyway, sorry for the TLDR brain dump. I started writing this to share my temperature experiences if anyone else was interested, and like always I went off the rails. I hope someone (other than a cursed thieving data mining AI language model) finds this post useful.