Help: SuperMicro 721TQ chassis and Hard drive temperatures


nthu9280

Well-Known Member
Feb 3, 2016
1,629
501
113
San Antonio, TX
Greetings all!
I have an X10SDV-8C-TLN4F board and 4x 6TB SAS drives in a CSE-721TQ chassis, and the thermals are making me uncomfortable. With the fan profile set to Standard or Optimal, the airflow from the provided 120mm rear fan doesn't appear to be sufficient and the drive temps are hitting 58°C. I put a 60mm fan on the passive heatsink and the CPU temp seems to be OK at <50°C; I'm not sure if that fan is altering the airflow through the chassis. The one thing I can think of is to add some obstruction at the front grille to force the airflow through the drive bays. I don't have a way to measure the LSI 9207-8i HBA's temperature, so no telling what that is running at. Has anyone experienced this, and how did you address it?
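
(In case it's useful for comparison: the BMC on this board exposes the CPU/system temperature sensors, and they can be dumped with plain ipmitool - nothing board-specific:)

Code:
# list every temperature sensor the BMC exposes (CPU, system, DIMMs, ...)
ipmitool sdr type temperature
# or dump all sensors with their thresholds
ipmitool sensor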

Thanks in advance!
 

sko

Active Member
Jun 11, 2021
253
131
43
CSE-721TQ. In the title. Now updated the post also. thanks
Oh dear, it was late. Sorry - I really wasn't able to hold the information from the title while reading the post...

We are running a NAS with the same case and an A2SDi-4C-HLN4F board (Atom C3558), but with SATA drives, as the PCIe slot holds an X520-DA2 NIC (AOC-STGN-i2S).
Currently that system holds 2x 16TB HDD + 2x 1TB SSD (previously 4x 8TB, and before that 4x 6TB) and is used for 3rd-stage backups, so it pulls/receives ZFS incremental snapshots every few minutes.
Hard drive temperatures were never an issue in that NAS; they are usually well below 40°C at idle (currently 36°C) and below 50°C under high load. During resilvering a few weeks ago, when switching from the 8TB to the 16TB drives, the new drives hit 44°C and the old ones have 48°C logged as their max temperature.
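
(The replication itself is nothing fancy - just the usual incremental send/receive over SSH, roughly like this; dataset, snapshot and host names are placeholders:)

Code:
# send only the delta between the previous and the newest snapshot to the backup host
zfs send -i tank/data@prev tank/data@now | ssh backuphost zfs receive backuppool/data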

Fan speed is set to full - that fan isn't loud, and I really can't make out any difference between the "Optimal" and "Full" settings from the ~2m distance to my desk. The (also not particularly loud) stack of two Catalyst 3750X switches in the wall-mounted network cabinet the NAS is standing on is much more audible.


We also use almost the same platform - a Xeon D-1518 (X10SDV-TP8F board, i.e. the SYS-1518D-FN8T) - as routers/gateways in our branches. They put out considerably more heat than that Atom (35W vs. 16W TDP), but still well within reasonable bounds. CPU temperatures were never an issue with those systems, even during warm summers, so I don't think the CPU/SoC is putting out enough heat to be responsible for those high disk temperatures...


*What* drives exactly are you using?
We've been running HGST Ultrastar / WD DC HC320 and HC550 drives in that system without any problems. Those drives tend to stay on the cooler side compared to e.g. Seagate Constellation ES, and especially the crappy WD Red Pro we had in another NAS (all 4 of those drives were RMA'd within the first 2 years and also replaced with DC HC). (I'm mainly talking about SAS drives here - that NAS is the only system using SATA HDDs.)
Given that these NAS chassis are designed for low-power NAS systems (the Xeon D is pretty overkill for a NAS, IMHO...) and not high-end storage servers, I'd always go for drives with lower power specs, or ones known to be relatively energy-efficient and hence not putting out much heat. Also, that HBA might add a considerable amount of heat - is SAS really necessary for your use case? (Especially given the backplane only supports SAS2.)
 

i386

Well-Known Member
Mar 18, 2016
4,263
1,558
113
34
Germany
Has anyone experienced this, and how did you address it?
I recently had a similar experience, but with a bigger chassis (846): two HDDs were running a lot hotter than other HDDs of the same model.
I shut the server down and looked at the cabling behind the backplane, because my first thought was that a cable was obstructing one of the openings. But there was nothing problematic there.
So I pulled the two problematic HDDs out one after the other and saw a lot of dust on them (and by a lot I mean big pieces that felt pretty "solid"; the server rack is in my home office, not a datacenter with a controlled environment). I cleaned them, put them back in and started the server. To my surprise the HDDs were now a lot cooler and never more than 5°C warmer than the surrounding HDDs.
In my case I think the dust acted like an insulator, keeping the heat in the HDDs, and also obstructed the airflow.

tldr; dust can lead to "overheated" hdds
 

nthu9280

Well-Known Member
Feb 3, 2016
1,629
501
113
San Antonio, TX
Oh dear, it was late. Sorry - I really wasn't able to hold the information from the title while reading the post...
No worries. Thanks for your time and the detailed response. These are HUS726060AL5215 HGST 6TB SAS drives. I bumped the rear fan speed up a notch and added a filter near the front grille to add some resistance, with the intent of forcing more air over the drives. The drives are idling at 43-45°C now; the drive trip temperature is 85°C. I still need to test with some heavy IO & CPU load.

Code:
# loop over all block devices, print their SMART temperature lines, then the device name
for drv in $(lsscsi -b | awk '{print $2}'); do smartctl -A "$drv" | grep -i 'Temperature'; echo "$drv"; done

Current Drive Temperature:     45 C
Drive Trip Temperature:        85 C
/dev/sda

<snip>


If anyone is looking for ipmitool fan control raw commands for this board:

Code:
# X10SDV-8C-TLN4F
# CPU zone 30% & peripheral zone 60% duty cycle
ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x1E   # zone 0x00 (CPU/system), 0x1E = 30%
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x3C   # zone 0x01 (peripheral), 0x3C = 60%
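
(One caveat, as commonly reported for Supermicro X10/X11 BMCs: the manual duty cycles only stick when the fan mode is set to Full, otherwise the BMC's own profile keeps overriding them. The usual companion commands - worth verifying against your own BMC:)

Code:
# set fan mode to Full so the manual duty cycles aren't overridden
ipmitool raw 0x30 0x45 0x01 0x01
# read back the current duty cycle of a zone (last byte: 0x00 = CPU, 0x01 = peripheral)
ipmitool raw 0x30 0x70 0x66 0x00 0x00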

The intent is for this box to be an always-on AIO machine with storage, VM and container workloads for home use. I'm playing with Proxmox 8 at the moment. The plan is to pass the SAS controller through to a storage VM such as TrueNAS Core / napp-it. While I could do ZFS in Proxmox itself, I'm thinking of keeping Proxmox fairly vanilla and separating out the storage for modularity. I'm all ears for recommendations / best practices for future upgrades, backups and ease of troubleshooting.
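
(For anyone searching later, the rough shape of the HBA passthrough I'm planning - the PCI address and VMID are placeholders, and IOMMU has to be enabled in the BIOS and on the kernel command line first:)

Code:
# find the HBA's PCI address
lspci | grep -i LSI
# pass the whole HBA through to VM 100
qm set 100 --hostpci0 02:00.0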

Supermicro actually sells a configured system with a 16c/32t Xeon D-1587 in this chassis, so I figured my config should be well within the TDP the chassis can handle.

tldr; dust can lead to "overheated" hdds
Thanks for the heads up. I'll keep an eye out for that, especially on the stuff I have in the garage, which for the most part is turned off.
 

sko

Active Member
Jun 11, 2021
253
131
43
These are HUS726060AL5215 HGST 6TB SAS drives.
We've used the 8TB SATA variant of those drives, so those shouldn't be the problem.

Instead of layering the job of a simple NAS inside a VM (a horrible idea performance-wise...), I'd just run vanilla FreeBSD on bare metal and run everything else in jails or (if necessary) bhyve VMs. That makes much better use of RAM, improves overall ZFS pool performance and greatly reduces management overhead.
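
(A jail is really just a few lines of config; a minimal sketch - the jail name, path and addresses are hypothetical:)

Code:
# /etc/jail.conf
storage {
    path = "/usr/local/jails/storage";
    host.hostname = "storage.local";
    interface = "igb0";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}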

Also, if you are not fully committed to those drives, I'd recommend using only 2 spinning-rust disks and adding 2 SSDs as a 'special' device mirror for metadata and small files (the upper size limit for small files depends on the HDD and SSD sizes you'll be using). This will massively increase performance for anything involving metadata (which is almost exclusively random I/O).
This is how we run that NAS now, because with a purely spinning-disk-based ZFS pool and hundreds (thousands?) of snapshots, the send|receive jobs were painfully slow. The simple task of listing the snapshots of a single dataset often took several minutes, and it is performed dozens of times a day by the incremental send|recv backup jobs.
File operations on e.g. file servers also involve a lot of metadata crawling, so those applications will benefit from a special device as well.
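
(Roughly how the special vdev is added and tuned - pool and device names are examples; note that a special vdev can't be removed again from a raidz pool, so plan the layout first:)

Code:
# add a mirrored 'special' vdev for metadata
zpool add tank special mirror /dev/ada2 /dev/ada3
# also store blocks of small files (here <= 32K) on the special vdev; applies to newly written data
zfs set special_small_blocks=32K tank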

If I had to configure/repurpose that system as an all-round home server, I'd go for 2x SATA HDD + 2x SATA SSD for the storage/fileserver pool and use the freed PCIe slot for a quad-M.2 carrier card, with 2 of the NVMe drives for the OS and the rest as a separate pool that holds jails and VMs.
 

rtech

Active Member
Jun 2, 2021
314
113
43
You can pass a disk partition through directly to a VM with native performance.

Config from virt-manager:

Code:
<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source dev="/dev/disk/by-id/scsi-1ATA_WDC_WD10JPVX-60JC3T0_WD-WXM1E14JRXM7-part1"/>
  <backingStore/>
  <target dev="vdb" bus="virtio"/>
  <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
</disk>
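
(Since the OP is on Proxmox: the equivalent there is to attach the raw block device to the VM config, along these lines - the VMID and the by-id path are just examples:)

Code:
# attach a whole disk (or a partition) by its stable /dev/disk/by-id path to VM 100
qm set 100 --scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL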