Making the Supermicro SC743 quieter


tcpluess

Member
Jan 22, 2024
36
2
8
Hi all

I got my new homelab server. As I don't have a rack or the space for one, I went with a tower. Dual PSUs also don't make sense in my setup, so I chose the SC743, which has only one PSU. That is fine for me.
I now have everything set up nicely; I even managed to find a 2.5" hot-swap bay that mounts into the front panel, which is very good. So I can now install 16 disks in total.
Now I have the problem that even with the fan speed set to "Optimal" it is still a bit noisy. Does someone have experience in making these towers quieter?

Internally I have 4 removable 80mm fans, as well as one large 120mm fan on the CPU and one large fan (also 120mm? not sure) in the PSU.
I also don't know what happens if I just remove a couple of the small fans. I believe the BIOS or the BMC will complain about a fan failure. Or is it worth a try?
 

sko

Active Member
Jun 11, 2021
249
131
43
There are some threads here about controlling the fans via IPMI on Supermicro systems. The linked scripts vary wildly in regards to feedback loops, target temperatures etc., but the key information is the raw BMC command to adjust fan speed:
ipmitool raw 0x30 0x70 0x66 0x01 0x0[01] <dutycycle%>
The duty cycle can simply be given in decimal; the BMC will translate it.

The second-to-last hex value is often called the "fan zone" - there is only 00 or 01 and you have to set both. Most threads here in the forums talk about a "CPU zone" and a "disk zone", but no system I've seen so far actually has anything like that - disks are in the front, CPUs in the back and all fans are lined up in between. 00/01 often doesn't even correlate to the left/right group of fans, or to the front/back, high/low or odd/even fan headers on the board.
You can test whether 00/01 maps to anything useful, or simply issue ipmitool raw [...] 0x00 [...] && ipmitool raw [...] 0x01 [...] and set both zones to the same speed.
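For example, to pin all fans on both zones to a 40% duty cycle (40 is just an illustration - pick whatever keeps your temperatures in check):
ipmitool raw 0x30 0x70 0x66 0x01 0x00 40 && ipmitool raw 0x30 0x70 0x66 0x01 0x01 40
Note that the BMC usually has to be set to "Full" fan mode first, otherwise its own fan logic may override the raw setting after a few seconds.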

You can easily build a shell script tailored to your needs or adapt one of the available scripts (a search on GitHub also turns up several results).
Most scripts aim for some fixed temperatures; I went for a more universal solution by reading the tjmax, setting a sane baseline temperature (i.e. a reasonable temperature at idle) and interpolating between those two values.
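The core of it boils down to a single linear interpolation - a rough sketch (all variable names are made up, and a real script should clamp the result between a sane minimum and 100):

# temp = current CPU temperature, baseline = idle temperature, tjmax = throttle limit
# map [baseline..tjmax] onto a [minduty..100] duty cycle range (integer math)
duty=$(( minduty + (100 - minduty) * (temp - baseline) / (tjmax - baseline) ))
ipmitool raw 0x30 0x70 0x66 0x01 0x00 $duty
ipmitool raw 0x30 0x70 0x66 0x01 0x01 $duty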


Regarding the fans:
Supermicro usually has different fan variants available for most systems. Try to find out what part number was used in the lower-spec/single-CPU systems that used that case.
High-rpm/high-throughput fans usually have stronger magnets and tend to vibrate at lower rpm, so using lower-rpm variants makes sense when using a script to control fan speed (of course only if cooling requirements can still be met by those fans!).
 

i386

Well-Known Member
Mar 18, 2016
4,250
1,548
113
34
Germany
The 743 & 745 come with different fans depending on skus; there are "sq" chassis with 2,8k rpm fans and also x11/purely platform enabled chassis with fans that spin up to 11k rpm.

So the question is what sku/fans do you have? :D
 

tcpluess

Member
Jan 22, 2024
36
2
8
There are some threads here about controlling the fans via IPMI on Supermicro systems. [...] the key information is the raw BMC command to adjust fan speed:
ipmitool raw 0x30 0x70 0x66 0x01 0x0[01] <dutycycle%>
[...]
Fantastic, I will try this. How do people find out these hex codes?
And did I understand correctly: you run your script at regular intervals and then it adjusts the fan speed automatically, like a sort of control loop?

The 743 & 745 come with different fans depending on skus; there are "sq" chassis with 2,8k rpm fans and also x11/purely platform enabled chassis with fans that spin up to 11k rpm.

So the question is what sku/fans do you have? :D

Good question, I don't know off the top of my head! My chassis is a "743AC-668B". As it has only a 668 W PSU, I don't think I have the 11k fans.
 

i386

Well-Known Member
Mar 18, 2016
4,250
1,548
113
34
Germany
Good question, I don't know off the top of my head! My chassis is a "743AC-668B". As it has only a 668 W PSU, I don't think I have the 11k fans.
If the fans were not replaced, they are the 5k rpm versions (FAN-0074L4). If memory serves me right, these fans have an annoying whine to them...
You could replace them with FAN-0104L4 (80mm, max 2.8k rpm) from the SQ SKUs. I have 3 of them in the mid-wall of my 745, plus the 92mm 2.8k rpm fan in the rear, cooling a 280 W TDP Threadripper system.
 

tcpluess

Member
Jan 22, 2024
36
2
8
If the fans were not replaced, they are the 5k rpm versions (FAN-0074L4). If memory serves me right, these fans have an annoying whine to them...
You could replace them with FAN-0104L4 (80mm, max 2.8k rpm) from the SQ SKUs. I have 3 of them in the mid-wall of my 745, plus the 92mm 2.8k rpm fan in the rear, cooling a 280 W TDP Threadripper system.
Great, I will try this. Is your system then quiet enough to have it in the office?

Also, if I try right now without replacing the fans (as I will need to order/find them somewhere), will the airflow still be acceptable if I keep the 5k fans and reduce their speed to 2.8k?
 

sko

Active Member
Jun 11, 2021
249
131
43
How do people find out these hex codes?
By randomly searching the internet :p
TBH, no idea - my best guess would be tcpdump'ing the traffic between Supermicro's SMCIPMITool or IPMIView and a remote BMC...

did I understand correctly: you run your script at regular intervals and then it adjusts the fan speed automatically, like a sort of control loop?
The script runs as a (very basic) service. The rc file simply uses daemon(8), which creates the pidfile and calls a script that only contains a crude while-true loop; that loop runs the actual fancontrol script and then sleeps for 2 seconds.
I wanted to keep the fancontrol script as generic as possible (no main loop or service-related stuff), so it can easily be adapted or used by other programs/services.
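The wrapper really is trivial - something like this (the script path is made up):

#!/bin/sh
# crude supervisor loop: run the actual fancontrol script, then sleep, forever
while true; do
    /usr/local/sbin/fancontrol.sh
    sleep 2
done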

I dumped my script to termbin:

It still includes a lot of 'echo' statements to make it observable/debuggable.
Please note that while it is a portable 'pure' Bourne shell script, it is intended to run on FreeBSD with Intel CPUs and the coretemp driver loaded, as it gets the tjmax from the dev.cpu.N.coretemp.tjmax sysctls and the temperature deltas from the dev.cpu.N.coretemp.delta sysctls. This (and maybe the path to ipmitool) would have to be adjusted for other OSes/CPUs. There is also no guarantee that the GNU variants of the standard tools used (awk, bc) behave the same as the UNIX/POSIX versions on BSD.
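For reference, on FreeBSD the reads look roughly like this (CPU 0 shown - the actual script considers all cores; the exact output format of the temperature sysctls may vary):

tjmax=$(sysctl -n dev.cpu.0.coretemp.tjmax)    # prints e.g. '100.0C'
delta=$(sysctl -n dev.cpu.0.coretemp.delta)    # degrees below tjmax
temp=$(echo "${tjmax%%C*} - $delta" | bc)      # current core temperature in degC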

There used to be another correction for disk temperatures, but since I completely switched to SSDs in all servers a few years ago and even the SAS SSDs never reach any concerning temperature levels, I removed that (practically unused) code...

I've been using that script for ~3-4 years on various systems now. Apart from the cpubias (set to 98 on systems I want to be a bit quieter) and the baselinetemp (e.g. for the L-variant Xeons, which have a lower tjmax and hence need a bit more cooling), I never needed to touch any other value.
It might not be perfect, as it tends to vary the fan speed a bit aggressively under high load (package builds), but it does the job reasonably well, even on the server (SYS-6029U-TR4T w/ 2x Xeon E5-2660 v4) that sits in an open-frame rack in my office at home.
I only have rackmount systems though, so YMMV!
 

i386

Well-Known Member
Mar 18, 2016
4,250
1,548
113
34
Germany
Great, I will try this. Is your system then quiet enough to have it in the office?
It's still audible, but for me it's acceptable in my home office (so far nobody has complained in the Teams meetings).
will the airflow still be acceptable if I keep the 5k fans and reduce their speed to 2.8k?
For my use cases, definitely.
Supermicro also advertises the SQ version of the 743 as a tested/verified chassis for Threadripper Pro systems:
(screenshot of the tested/verified chassis list)
Source: M12SWA-TF | Motherboards | Super Micro Computer, Inc.
 

nabsltd

Well-Known Member
Jan 26, 2022
431
293
63
If the fans were not replaced, they are the 5k rpm versions (FAN-0074L4). If memory serves me right, these fans have an annoying whine to them...
I am using these fans in my 743, and they are quite nice. They do have a bit of a whine when you hit about 70-80% duty cycle, but it goes away at 100%, where they are just smooth and move a ton of air. Admittedly, I have full control over them, since I am not using an SM motherboard with IPMI, and they never get to 80% in real-world usage.

Part of the issue is almost certainly that, since the Supermicro motherboard only has 2 zones, the front fans run at the CPU duty cycle if you just plug them into random headers - most of the headers on SM motherboards are in the CPU zone. Since @tcpluess mentioned that the CPU has its own fan, it's OK for the case fans not to run fast enough to cool the CPU.

One or more PWM splitters connected to the FANA/FANB headers and then to the fan wall, combined with setting the IPMI fan mode to "Optimal", will run the fans at a quiet 2k RPM.
 

tcpluess

Member
Jan 22, 2024
36
2
8
I am back again ;-)
So, I just tested the IPMI raw command. It works perfectly. And indeed I do have the 5k fans, which seem to be among the louder ones.
Anyway, using the IPMI command I experimentally set the fan speed to 10%, and this was close to perfect. I also installed the "lm-sensors" package to look at the temperature sensors; the hottest reading was about 43 °C, which I would say is perfectly fine.
I don't have a ton of CPU load on that system, as it currently only serves as my ZFS NAS.
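For reference, that was just the raw command from above with both zones at 10%, plus lm-sensors to double-check the temperatures:
ipmitool raw 0x30 0x70 0x66 0x01 0x00 10 && ipmitool raw 0x30 0x70 0x66 0x01 0x01 10
sensors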

I have two SAS HBAs, one for the 2.5" hot-swap bay and one for the 3.5" hot-swap bay, so I can put in 16 disks in total.
I also installed an Intel X520 network card, which I will use to set up my 10G fibre network.

I have not measured power consumption, but I think it is not too high, so I can afford to lower the fan speeds a bit.

Currently I am trying to figure out how I can arrange my SSDs and HDDs such that I get the best ZFS performance (will probably make another thread for this).

Thanks for the IPMI command and thanks for uploading the fan control script!
 

sko

Active Member
Jun 11, 2021
249
131
43
I have not measured power consumption,
This can also be monitored via ipmitool:
ipmitool dcmi power reading 1_min
That's also how I monitor all my/our hosts from Zabbix.
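To pull just the number out of that (the exact label may differ between BMC generations, so check your output first):

ipmitool dcmi power reading 1_min | awk '/Average power reading/ {print $(NF-1)}'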

Currently I am trying to figure out how I can arrange my SSDs and HDDs such that I get the best ZFS performance (will probably make another thread for this).
To get somewhat 'modern' performance from spinning rust, add a mirror of SSDs as a "special" device to a pool of HDDs. This offloads the random-IO-heavy metadata and small files from the slow disks and especially improves metadata operations like ZFS's internal housekeeping and snapshot administration (listing hundreds/thousands of snapshots can easily choke an HDD pool for several minutes...).
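Adding such a special mirror to an existing HDD pool is a one-liner (pool and device names are placeholders):

zpool add tank special mirror /dev/ada4 /dev/ada5

By default (special_small_blocks=0) only metadata will be stored on it.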

For small pools, always use mirrors - they offer the most flexibility (also for upgrades or changes in pool geometry). Small RaidZ vdevs lose a lot of space to padding and are comparatively (very) slow, especially when resilvering, which can easily take several days with large providers.
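E.g. a pool of two mirrored pairs, which can later be grown two disks at a time (device names are placeholders):

zpool create tank mirror /dev/ada0 /dev/ada1 mirror /dev/ada2 /dev/ada3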

Maybe read through some of the articles about ZFS basics on klarasystems.com. I can also *highly* recommend the two ZFS books by Michael W. Lucas and Allan Jude; the first one also goes into detail about choosing the right pool layout / vdevs.
 

tcpluess

Member
Jan 22, 2024
36
2
8
This can also be monitored via ipmitool:
ipmitool dcmi power reading 1_min
[...]

To get somewhat 'modern' performance from spinning rust, add a mirror of SSDs as a "special" device to a pool of HDDs. [...]
Yes, I already operate some Proxmox nodes with ZFS at work, so I have some experience.
But for my home setup I can experiment with crazier setups, like buying a lot of used SSDs from *bay and making an SSD-only pool, and so on. I have indeed had good experiences with the special metadata device.
My plan is the following:

I have two HDDs in a mirror, 14 TB Ultrastars. Further, I have a couple of SSDs, like the HGST S842 and HUSSL4040.
I will make one special device of mirrored SSDs, and then I will create one dataset that has special_small_blocks = <recordsize>. With this, all data in this particular dataset will live only on the SSDs.
For example, I will put my VM and LXC storage in this dataset.
Other data, such as movies and music, I will put in another dataset with special_small_blocks = 0, so that it resides only on the HDDs.
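In ZFS terms, my plan would look roughly like this (dataset names are made up):

zfs create -o recordsize=128K -o special_small_blocks=128K tank/vmdata   # everything goes to the SSDs
zfs create -o special_small_blocks=0 tank/media                          # only metadata goes to the SSDs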

When my VMs are set up as described above, will the ZIL be on the SSDs as well, or will I need a SLOG? Because I want the maximum performance possible with my hardware. On the other hand, it feels like overkill to use a 400 GB SSD as a SLOG.

I made some experiments with the S842 SSD as SLOG and got roughly 4500 fsyncs/second (measured with Proxmox pveperf), which I find a bit low. Can I improve this somehow?
 

sko

Active Member
Jun 11, 2021
249
131
43
I will make one special device of mirrored SSDs, and then I will create one dataset that has special_small_blocks = <recordsize>. With this, all data in this particular dataset will live only on the SSDs.
Just make sure your special device will *NEVER* run full - pool performance will take a total nosedive, comparable to a pool running at >90% used capacity.
I'd simply try offloading only the metadata to the special device first. This usually already gives the biggest performance gain; then one can (carefully) dial up the special_small_blocks value - a good rule of thumb is setting it to the block size of the HDD providers, so anything smaller than a full block goes to the faster special device.
You can also use zdb to analyze the block-size distribution of your pool, then increase or decrease special_small_blocks to get a bit better performance or to prevent the special device from filling up too fast.
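E.g. (this can take a while on a large pool; -L skips the leak checks):

zdb -Lbbbs tank

The block-size histogram near the end of the output shows how much data would end up below a given special_small_blocks cutoff.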

When my VMs are set up as described above, will the ZIL be on the SSDs as well, or will I need a SLOG?
Put VMs on SSD pools. Period.
SLOG devices are mainly (only) useful for heavy database workloads on disk (which should be avoided by in-memory caching anyway), and nowadays those should simply reside on NVMe.
I have dedicated NVMe pools for VMs/jails/databases and e.g. package building with poudriere. Even before switching to SSDs everywhere, spinning rust was only used as a mass grave for 'colder' data and backups.
 

tcpluess

Member
Jan 22, 2024
36
2
8
Just make sure your special device will *NEVER* run full - pool performance will take a total nosedive, comparable to a pool running at >90% used capacity. [...]

Put VMs on SSD pools. Period. [...]
Yes, I know this. However, I have enough SSDs for all my data, so it should be no problem at the moment.
However, I am not sure I really want to make two separate pools, because the RAM for the ARC would then be shared between the two, so in the end each pool effectively has only half of the RAM available for ARC, doesn't it?
So my initial thought was to have just one large pool.

I always read that a SLOG is important for VMs, too. Is that wrong? So with a special device and HDDs I am basically fine for VM usage?

Unfortunately, I don't have any NVMe drives available at the moment. Data centre SSDs yes, but no NVMe. They seem to be a bit harder to find and more expensive. The Samsung 970/980/990 are cheap and high-capacity, but I have read that they don't perform well with ZFS.
 

nabsltd

Well-Known Member
Jan 26, 2022
431
293
63
And indeed I do have the 5k fans, which seem to be among the louder ones.
The FAN-0074L4 are actually among the quietest fans used in Supermicro chassis. With 68 CFM at only 45 dBA at full speed, they absolutely crush the stock fans in the SC826, SC846, and SC847 chassis, which move 72 CFM but at 54 dBA and 7000 RPM.

Here are all the 80mm fans found in Supermicro chassis:

SM Part #    Carrier  Make         Model     Model #          RPM   dBA  CFM
FAN-0104L4   Green    Sanyo Denki  San Ace   9S0812P4F051     2800  24   32.9
FAN-0044L4   -        Nidec        UltraFlo  T80T12MS1A7-57   3750  36   48.5
FAN-0074L4   Green    Sanyo Denki  San Ace   9G0812P1F03      5000  45   68.3
FAN-0082L4   Rear     Sanyo Denki  San Ace   9G0812P1F09      5000  45   68.3
FAN-0062L4   Flat     Sanyo Denki  San Ace   109P0812P2C031   5000  47   59.1
FAN-0125L4   Flat     Sanyo Denki  San Ace   9GA0812P2M0031   6700  47   59.6
FAN-0094L4   Flat     Sanyo Denki  San Ace   9G0812P1G09      6300  51   90.3
FAN-0095L4   Cover    Sanyo Denki  San Ace   9G0812P1G09      6300  51   90.3
FAN-0126L4   Notch    Nidec        UltraFlo  V80E12BHA5-57    7000  54   72.5
FAN-0127L4   Cover    Nidec        UltraFlo  V80E12BHA5-57    7000  54   72.5
FAN-0116L4   Rear     Nidec        UltraFlo  V80E12BS1A5-57   8200  58   85.5
FAN-0118L4   Notch    Nidec        UltraFlo  V80E12BGA5-57    9500  61   100.0
 

Koop

Active Member
Jan 24, 2024
174
85
28
I was gonna troll and say you should drop in the FAN-0127L4s and that they'd be much quieter, but @nabsltd is too helpful.
 

Sjhwilkes

New Member
Oct 17, 2020
28
2
3
Just installed 3x Noctua NF-A8s in my 2U box; now the bearing noise from the fans in the PSUs is the only annoying part. I'm running the 65W low-power Xeons and no spinning disks, with total power use of about 160W - so I figure the original airflow was well in excess of my needs. I have a PWS-920P-SQ on the way from eBay and hope that will reduce the PSU noise. I still need to figure out how to stop the IPMI moaning about the fans only running at 1600 rpm.
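(I've seen that the usual fix is lowering the BMC's fan thresholds with ipmitool - something like the line below, repeated for each FANx sensor listed by `ipmitool sensor`, with values that depend on the board - but I haven't tried it yet:
ipmitool sensor thresh FAN1 lower 100 200 300
where the three values are the lnr/lcr/lnc thresholds, which must sit below the fans' idle rpm.)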

Edit to add - the 920 SQ PSU is practically silent. Like, OMG, my life is transformed and I can leave the lab on all the time now. The first eBay example was kaput, the second one is not. At 130W the system is almost inaudible. We'll see as I spin up some more VMs (and put some more RAM back in; I took it down to 256G as I had a bad stick or sticks).
 