Power consumption for home NAS


madbrain

Active Member
Jan 5, 2019
I built a NAS with the following specs. The power consumption is about 100 to 102W idle according to my kill-a-watt.
I am wondering if there is anything I can do in order to reduce the idle power consumption.

Asus Z170-AR motherboard (Z170 chipset)
Skylake i5-6600K CPU running at 4.4 GHz
NH-D14 cooler (dual fan)
Cooler Master HAF-XM case, with 4x200mm fans and 1x140mm fan
Raidmax 1200 PSU
32GB DDR4-3000 Patriot RAM, at 2400 MHz (unreliable beyond 2400 unfortunately!)
1 x Kingston 96GB SSD (SATA II) mounted to the back of the motherboard for the OS
5 x WD 10TB easystore, shucked in December
1 x LSI 9207-8i PCI-e 3.0 x8
1 x LSI 9207-4i4e PCI-e 3.0 x8
1 x Aquantia AQN-107 10 GbE NIC, PCI-e 3.0 x4, running at x2 but still manages 10 GbE in iperf

I am running Ubuntu 18.04 with ZFS on Linux. The primary application right now is sharing the RAIDZ2 volume via Samba. The hard drives never spin down.

100W may not seem like much to some, but it's 876 kWh per year. My marginal kWh cost is about 40 cents, so that's $350/year.

I do have the suspend feature working, as well as WOL, but I was wondering if there is any way to optimize the power consumption some more.
1) Any BIOS settings that might help, like ASPM? If so, can anyone using Asus motherboards chime in?
2) Any way to get the drives to spin down other than suspending the whole machine? When no Samba client is connected, I don't see why there should be disk activity on the volume, but there still is.
3) Any way to get suspend to work automatically rather than just manually? I have tried to play with Ubuntu power management settings, but the machine never suspends on its own.
4) Any other way to get it to consume less? The 6600K is a 91W TDP CPU. Would a lower-power CPU consume less at idle? Less RAM? Fewer fans? I can't really tell where each of the 102W is going, so I'm not quite sure what direction to go in.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
From the choice of components it looks like you might've re-purposed an old gaming machine into a file server?

First, you're presumably overclocking your CPU if it's at 4.4GHz...? That itself will raise your power usage quite significantly, and likely isn't netting you any benefits.

Secondly, two HBAs clocking in at 10W or more each and only five HDDs - what are these being used for?

Thirdly - hugely overpowered PSU that'll rarely go above 10% utilisation; if you've got a lower power PSU knocking about use that.

Fourthly, is the 10Gb card actually necessary?

Fifthly, yes, five big-ass fans is probably a lot to cool that, but the power usage will depend mostly on the speed; a 200mm fan spinning at 500rpm will use a tiny portion of the power it might use at 2000rpm.

Hard drives won't spin down unless they're expressly configured to, and any read/write to the array they're sitting behind will mean waiting for all the drives to spin up, introducing delays on the array and wear on the drives. Spin down will only be of any benefit if you are 100% certain that the array won't be used for upwards of ~30mins at a time, and that'll save you about 5W per drive over idle.

Suspend is a different option (and better to pursue than HDD spin-down IMHO); there are several ways to go about that depending on how the OS is set up - but most of the time the OS can never guess 100% whether it's ripe for a suspend anyway, so for servers I wanted to do this with, I always ended up writing my own script to check for things like users logged in over ssh, files being used by other computers, etc etc.
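The gist of such a script is just a laundry list of checks followed by a suspend. A very rough sketch (the checks and names here are examples to adapt, not a drop-in solution), meant to run from root's cron every few minutes:
Code:
#!/bin/sh
# bail out (stay awake) if anything below looks busy

# 1. anyone logged in over ssh or on the console?
who | grep -q . && exit 0

# 2. any SMB sessions open? (smbstatus -b lines starting with a PID = active clients)
smbstatus -b 2>/dev/null | grep -qE '^[0-9]+' && exit 0

# 3. add whatever else matters to you: NFS clients, running backups, recent zpool activity...

# nothing looks busy - suspend; WoL brings the box back when it's needed
logger "autosuspend: no activity detected, suspending"
systemctl suspend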
 

madbrain

Active Member
Jan 5, 2019
Hi,

From the choice of components it looks like you might've re-purposed an old gaming machine into a file server?

First, you're presumably overclocking your CPU if it's at 4.4GHz...? That itself will raise your power usage quite significantly, and likely isn't netting you any benefits.
Yes. I have tried throttling it down to the regular clock. It made no difference to the idle power, though.

Secondly, two HBAs clocking in at least 10W each and only five HDDs - what are these being used for?
There are tons of hotswap SATA drive bays in the case - enough for 5 more SSDs and 3 more HDDs in the front.
Plus, space for one more HDD internally. That's 9 more potential SATA/SAS devices in addition to the 6 current (5 HDDs + 1 SSD).
So, up to 15 SATA devices could be hooked up to that case. The LSI controllers provide 12, and the Intel provides 6. That is 18 internal ports, so only 3 are extra.

The docks are either for future expansion or backup. One of the cards has external miniSAS, and that's for future expansion and/or backup as well.

Thirdly - hugely overpowered PSU that'll rarely go above 10% utilisation; if you've got a lower power PSU knocking about use that.
I do have a very old 600W PSU around. It's not modular. Not sure how efficient it is. It may not have enough power plugs to power all the drives, docks, and fans.

Fourthly, is the 10Gb card actually necessary?
Well, nothing is really necessary in theory, but I want to keep it in there as the array performance exceeds 1 GbE throughput.

Fifthly, yes, five big-ass fans is probably a lot to cool that, but the power usage will depend mostly on the speed; a 200mm fan spinning at 500rpm will use a tiny portion of the power it might use at 2000rpm.
Right. I can't actually tell under Linux what speed they're spinning at, as none of the sensors seem to be working. The whole box is pretty quiet, however, so I don't think they are spinning too fast. I have another SSD to boot Windows on as well, where I can check how fast they spin under that OS at idle, but it might be different under Linux.

Hard drives won't spin down unless they're expressly configured to, and any read/write to the array they're sitting behind will mean waiting for all the drives to spin up, introducing delays on the array and wear on the drives. Spin down will only be of any benefit if you are 100% certain that the array won't be used for upwards of ~30mins at a time, and that'll save you about 5W per drive over idle.
I plan on having my HTPC, which has 6 OTA tuners, do OTA recordings of HD streams to the array, which could be multiple hours long. But it will also power itself down when there is no program set to record. If the NAS remains up all day, 25W could still be significant. Once I get this to work reliably, I will cancel my DVR satellite TV service. And save a bunch of watts there too, since the DVR never powers down, but the HTPC does.

Suspend is a different option (and better to pursue than HDD spin-down IMHO); there are several ways to go about that depending on how the OS is set up - but most of the time the OS can never guess 100% whether it's ripe for a suspend anyway, so for servers I wanted to do this with, I always ended up writing my own script to check for things like users logged in over ssh, files being used by other computers, etc etc.
I see. For Windows, at least for file serving, I found it was usually pretty good at figuring that out itself - if there is ongoing SMB activity, it remains up. I succeeded in doing automated Acronis backups over LAN / WLAN by using a script that sent a WOL packet on startup. Would like to do the same for the Linux NAS.
 

madbrain

Active Member
Jan 5, 2019
Followed the instructions at
Tip: letting your ZFS pool sleep — Rudd-O.com in English

I ran hdparm manually on all 5 HDDs and did get the drives to spin down. Usage went down to 74.9W on the kill-a-watt.
Not sure if it's worth doing it on the SSD or if it would actually ever spin down, since it holds the OS root file system.
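The commands were something like this (device names here are just placeholders for my drives; per the hdparm man page, -S 241 means a 30-minute standby timer):
Code:
# set a 30-minute standby (spin-down) timer on each data drive
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    sudo hdparm -S 241 "$d"
done

# or force an immediate spin-down to see the effect on the kill-a-watt right away
sudo hdparm -y /dev/sdb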

I'm just not sure where to save the hdparm commands so that they run again on the next boot.
This is an old article, and /etc/rc.d/rc.local does not exist on modern Ubuntu 18.04.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Yes. I have tried throttling it down to the regular clock. It made no difference to the idle power, though.
OK, good that the OC doesn't adversely affect the idle too much, but I still suspect it won't give you any additional benefit. It's extremely unlikely you'll be CPU bound by anything for the time being at least.

So, up to 15 SATA devices could be hooked up to that case. The LSI controllers provide 12, and the Intel provides 6. That is 18 internal ports, so only 3 are extra.

The docks are either for future expansion or backup. One of the cards has external miniSAS, and that's for future expansion and/or backup as well.
If you're not using the bays yet, and your power costs silly money, then I'd remove them until they're needed.

I do have a very old 600W PSU around. It's not modular. Not sure how efficient it is. It may not have enough power plugs to power all the drives, docks, and fans.
600W would be more than enough to power anything, even a 400W would likely be overpowered for your existing kit. I'm happily running an IVB Xeon E3, a 16-port HBA and twelve HDDs off a 350W PSU, peak load has never exceeded 250W.

As to actual efficiency, you can't just go by the efficiency rating of the PSU - you need to also know where your power usage sits on the efficiency curve, since PSUs are generally most efficient at ~50% load and very inefficient at <10% load. Sometimes a 300W bronze will be far more efficient than a 1kW titanium for this reason. If you've already got a power meter and the time to try it, it's well worth testing respective power draws, and with electricity as expensive as yours it might even make economic sense to buy a smaller PSU if it'll pay for itself in efficiency savings.
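As a rough worked example with assumed numbers: if the box wants ~90W DC at idle, a big PSU loafing along at ~75% efficiency down at that load pulls about 120W from the wall, while a smaller unit managing ~88% pulls about 102W - call it 18W saved, which at your 40c/kWh works out to roughly $60/year.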

Well, nothing is really necessary in theory, but I want to keep it in there as the array performance exceeds 1 GbE throughput.
Just a question of which is most important really - many people never actually need >1Gbps on their home network even if the array is capable of it, so if you've got very expensive electricity you need to make a judgement call as to whether it's worth paying to keep it. I'm a frugal Scot, so whilst I've got a fast array there's nothing I need 10Gb for, and the power+noise+switch costs basically weren't worth me keeping it.

Right. I can't actually tell under Linux what speed they're spinning at, as none of the sensors seem to be working. The whole box is pretty quiet, however, so I don't think they are spinning too fast. I have another SSD to boot Windows on as well, where I can check how fast they spin under that OS at idle, but it might be different under Linux.
The sensors will likely work under lm-sensors after you've run sensors-detect and that'll hopefully show you the rpm. Without a multimeter (or maybe a fan controller that provides that function) I don't think there's a reliable way to calculate fan power draw but if you know a) the rpm and b) the fan in question you can likely eyeball a rough figure from the manufacturer's datasheet.
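If you haven't been through it already, the usual incantation is roughly this (the chip driver name below is just an example of what it might find):
Code:
sudo apt install lm-sensors
sudo sensors-detect      # say yes to the safe probes; let it add the modules it finds
sudo modprobe nct6775    # or whichever chip driver it suggested (or just reboot)
sensors                  # fan rpms should show up here if a driver matched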

I plan on having my HTPC, which has 6 OTA tuners, do OTA recordings of HD streams to the array, which could be multiple hours long. But it will also power itself down when there is no program set to record. If the NAS remains up all day, 25W could still be significant.
I'm assuming you've got a good reason for having the tuner cards in your HTPC rather than hidden in a cupboard somewhere; I also assume it's not running linux...? Yeah, that should work assuming suspend and WoL behave nicely on the linux side and you can find a way of invoking the WoL from your HTPC a few mins before the recording is due to start - but again it's something that needs to be experimented with. This bit might need its own thread ;)

I see. For Windows, at least for file serving, I found it was usually pretty good at figuring that out itself - if there is ongoing SMB activity, it remains up. I succeeded in doing automated Acronis backups over LAN / WLAN by using a script that sent a WOL packet on startup. Would like to do the same for the Linux NAS.
Don't take my word as gospel...! ;) Just that in the past I've had linux servers that I set to suspend-after-X suspend when I didn't want them to, so always ended up writing my own scripts - basically a laundry list of things to check for (and clean up if possible) before hitting the shutdown switch. It's entirely possible the suspend detection in whatever-ubuntu-is-using-to-control-suspend takes this into account already, so it's worth having an experiment to see if this suits your usage patterns.

I ran hdparm manually on all 5 HDDs and did get the drives to spin down. Usage went down to 74.9W on the kill-a-watt.
Not sure if it's worth doing it on the SSD or if it would actually ever spin down, since it holds the OS root file system.

I'm just not sure where to save the hdparm commands so that they run again on the next boot.
This is an old article, and /etc/rc.d/rc.local does not exist on modern Ubuntu 18.04.
There is indeed no point running spin-down against SSDs as they have no spin other than electrons. There are some SATA power savings (ALPM) that can be done but if you're running kernel 4.15 or newer they should be in place already (and are relatively small potatoes but were a fairly big deal for laptops).
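If you want to confirm what's actually in effect, the ALPM policy is exposed in sysfs - read first, and note this only covers the onboard AHCI ports (the discs behind the LSI HBAs most likely won't be affected):
Code:
# current SATA link power management policy, one entry per port
cat /sys/class/scsi_host/host*/link_power_management_policy

# med_power_with_dipm is the power-saving option added in 4.15; set it per port if you see max_performance
echo med_power_with_dipm | sudo tee /sys/class/scsi_host/host0/link_power_management_policy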

You can quite easily recreate the rc.local on debian systems if you're saddled with systemd; I still find it very useful, and hopefully it'll work for ubuntu as well. Note that this creates and overwrites any existing rc.local file; you might want to search to see if there are any ubuntu-specific gotchas (never blindly run code from random forum-goers!):
Code:
echo -e "#!/bin/sh -e\n# commands below\n\n# commands above\nexit 0" > /etc/rc.local
chmod +x /etc/rc.local
systemctl enable rc-local
systemctl start rc-local
systemctl status rc-local
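For your hdparm case, the filled-in file would then end up looking something like this (device names are examples; /dev/disk/by-id names would be more robust since the sdX letters can shuffle between boots):
Code:
#!/bin/sh -e
# commands below
hdparm -S 241 /dev/sdb
hdparm -S 241 /dev/sdc
hdparm -S 241 /dev/sdd
hdparm -S 241 /dev/sde
hdparm -S 241 /dev/sdf
# commands above
exit 0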
P.S. There are many tweaks that can be done to get linux into very low-power modes but YMMV as they can often cause hard-to-diagnose problems, so be careful what tweaks you apply. I ran into an issue once where USB auto-suspend stopped a certain keyboard from working after not being used for 60s, and another where PCI link power management in the initramfs made my NICs appear to fail.
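If you want to see where the watts are going and try those tweaks one at a time, powertop is the usual starting point - just be aware that --auto-tune flips everything at once, which is exactly how you hit gotchas like the ones above:
Code:
sudo apt install powertop
sudo powertop --html=powertop.html   # report of tunables plus estimated power consumers
sudo powertop --auto-tune            # applies every tunable in one go; use with care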
 

madbrain

Active Member
Jan 5, 2019
OK, good that the OC doesn't adversely affect the idle too much, but I still suspect it won't give you any additional benefit. It's extremely unlikely you'll be CPU bound by anything for the time being at least.



If you're not using the bays yet, and your power costs silly money, then I'd remove them until they're needed.
Well, they are needed occasionally to back up the array, so I don't want to remove them. Not sure how they would consume any energy when there isn't even an LED on and no disk is hooked up. I guess I could unplug the power temporarily and see the difference.

600W would be more than enough to power anything, even a 400W would likely be overpowered for your existing kit. I'm happily running an IVB Xeon E3, a 16-port HBA and twelve HDDs off a 350W PSU, peak load has never exceeded 250W.
My kill-a-watt showed a peak of 190W when the 5 HDDs are spinning up. 350W would work with the current HDDs, but if I filled the docks during backups, it might be problematic.

As to actual efficiency, you can't just go by the efficiency rating of the PSU - you need to also know where your power usage sits on the efficiency curve, since PSUs are generally most efficient at ~50% load and very inefficient at <10% load. Sometimes a 300W bronze will be far more efficient than a 1kW titanium for this reason. If you've already got a power meter and the time to try it, it's well worth testing respective power draws, and with electricity as expensive as yours it might even make economic sense to buy a smaller PSU if it'll pay for itself in efficiency savings.
I'll experiment with the 600W unit I already have.

Just a question of which is most important really - many people never actually need >1Gbps on their home network even if the array is capable of it, so if you've got very expensive electricity you need to make a judgement call as to whether it's worth paying to keep it. I'm a frugal Scot, so whilst I've got a fast array there's nothing I need 10Gb for, and the power+noise+switch costs basically weren't worth me keeping it.
One of the uses for the server is as a backup server. I had been using an Odroid XU4 unit until now. It could handle 1 Gbps, but that was quite slow. It was very power efficient, though. My main desktop now has 6TB of SSDs. A full backup to the Odroid would take over 12 hours, which I thought was excessive. I am able to do the same in about 2.5 hours now with the 10 Gbps network, averaging 3.7 Gbps.

The sensors will likely work under lm-sensors after you've run sensors-detect and that'll hopefully show you the rpm.
Unfortunately, I already ran lm-sensors, and no luck at all seeing any of the fan sensors. A bit surprised given this is an old Z170 chipset.

Without a multimeter (or maybe a fan controller that provides that function) I don't think there's a reliable way to calculate fan power draw but if you know a) the rpm and b) the fan in question you can likely eyeball a rough figure from the manufacturer's datasheet.
Well, I can put the machine in suspend mode, disconnect the fan, and then look at the kill-a-watt again ...

I'm assuming you've got a good reason for having the tuner cards in your HTPC rather than hidden in a cupboard somewhere; I also assume it's not running linux...? Yeah, that should work assuming suspend and WoL behave nicely on the linux side and you can find a way of invoking the WoL from your HTPC a few mins before the recording is due to start - but again it's something that needs to be experimented with. This bit might need its own thread ;)
Tuner cards are in my HTPC due to the location of the coax for OTA TV that comes into my house. It's running Win10 with JRiver MC 23.
Not sure if I can make it invoke a WOL script or not when recording OTA. Acronis True Image has the option to invoke a start/end script.
Yes, it probably needs its own thread, I agree.

You can quite easily recreate the rc.local on debian systems if you're saddled with systemd; I still find it very useful, and hopefully it'll work for ubuntu as well. Note that this creates and overwrites any existing rc.local file; you might want to search to see if there are any ubuntu-specific gotchas (never blindly run code from random forum-goers!):
Code:
echo -e "#!/bin/sh -e\n# commands below\n\n# commands above\nexit 0" > /etc/rc.local
chmod +x /etc/rc.local
systemctl enable rc-local
systemctl start rc-local
systemctl status rc-local
Thanks.

P.S. There are many tweaks that can be done to get linux into very low-power modes but YMMV as they can often cause hard-to-diagnose problems, so be careful what tweaks you apply. I ran into an issue once where USB auto-suspend stopped a certain keyboard from working after not being used for 60s, and another where PCI link power management in the initramfs made my NICs appear to fail.
Not as weird as when I OC'ed my RAM to the rated 3000 speed and the machine wouldn't even POST. Asus auto-OC makes it work at 2933 MHz. But then, the USB KVM device disappears a few minutes after boot. No more keyboard or mouse. The power switch is the only option.
The weird thing is that an overnight memtest passes even at 2933 MHz, but somehow the USB is a problem. Same symptom regardless of whether the KVM is attached to a USB 2.0, 3.0 or 3.1 controller. That was under Win10; I didn't try under Linux. This is why I am running the RAM down to 2400 MHz (or 2133). Probably a motherboard issue ...
 

acquacow

Well-Known Member
Feb 15, 2017
Plenty of things you can do to save power on the CPU-side of things.

Enable speed-step
Enable c-states (including c6 states)
Disable hyperthreading
Disable vt-d
Disable any other option roms on the motherboard you aren't using
Run less DRAM

Should be able to shave 25W at least right there.

Then if you want to go farther, you can downclock the CPU and lower the max vcore as well.
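On the OS side (separate from the BIOS knobs above, and assuming Ubuntu's default intel_pstate driver), you can also cap things without touching vcore - something like:
Code:
sudo apt install linux-tools-common linux-tools-$(uname -r)   # provides cpupower
sudo cpupower frequency-set -g powersave    # make sure the powersave governor is active
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo   # keep the OC/turbo from kicking in
cpupower frequency-info                     # confirm governor and frequency range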
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
Well, they are needed occasionally to back up the array, so I don't want to remove them. Not sure how they would consume any energy when there isn't even an LED on and no disk is hooked up. I guess I could unplug the power temporarily and see the difference.
Likewise I don't know what's going on under the HSF, but the LSI HBAs seem to get pretty toasty regardless of whether there's any I/O or even any discs connected.

My kill-a-watt showed a peak of 190W when the 5 HDDs are spinning up. 350W would work with the current HDDs, but if I filled the docks during backups, it might be problematic.

I'll experiment with the 600W unit I already have.
My back-of-a-fag-packet estimate usually rates HDDs at 15W spin-up, 10W active and 5W idle; rounding up to sixteen drives you'd use a maximum of 240W on the HDDs (almost all of it on the 12V rail) if you want to consider future expansion as well. Remember your peak power draw of the HDDs likely also includes everything else powering up too. Other than CPU you don't have that much in the way of items that can draw a ridiculous load, so it'd be interesting to see how other PSUs perform. $350/yr on leccy goes a long way when looking into new kit for efficiency's sake...

One of the uses for the server is as a backup server. I had been using an Odroid XU4 unit until now. It could handle 1 Gbps, but that was quite slow. It was very power efficient, though. My main desktop now has 6TB of SSDs. A full backup to the Odroid would take over 12 hours, which I thought was excessive. I am able to do the same in about 2.5 hours now with the 10 Gbps network, averaging 3.7 Gbps.
Well, it depends entirely on the sort of backups you're doing I guess; mine are mostly delta rsync, so usually more limited by CPU and IO than by the network speed. Only my weekly disc images of boot volumes or yearly full backup peg the LAN at maximum, but that's only for a couple of hours, so not worth the splurge for 10GbE (2.5 or 5Gbps would be interesting if there were some decent switches at less than a king's ransom). As the backup server is only on for ~3-4hrs a week, it doesn't really matter that much to me that it's not super-efficient.

Unfortunately, I already ran lm-sensors, and no luck at all seeing any of the fan sensors. A bit surprised given this is an old Z170 chipset.
Likewise surprised. Nowt from Asus on the spec sheet but from the pics it looks to be a Nuvoton chip which are generally well supported as far as sensors go.
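If it is one of the NCT67xx parts, the nct6775 driver should cover it; on a lot of Asus boards the catch is an ACPI resource conflict that stops it loading. Worth a quick check (this is a guess at your exact symptom - look at dmesg first):
Code:
sudo modprobe nct6775                 # try loading the Super I/O driver by hand
dmesg | grep -iE 'nct67|conflict'     # an "ACPI ... conflicts" line means the firmware claims the chip

# if so, the usual workaround is booting with acpi_enforce_resources=lax
# (add it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run:)
sudo update-grub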

Well, I can put the machine in suspend mode, disconnect the fan, and then look at the kill-a-watt again ...
I suspect the fans are turning slowly enough and using so little power you'll see a negligible difference in usage. Generally the best course of action is to find the fan model and look up its maximum power specs.

It's running Win10 with JRiver MC 23. Not sure if I can make it invoke a WOL script or not when recording OTA.
That's always the rub with these things. A quick search shows that JRiver has a scripting interface of sorts available, but yeesh VB brings me out in a rash. Hopefully there's something better than this available but you know the software better than I do...
Scripting Plugin - JRiverWiki

VB... :shudders: ...but some people must like it I guess.

I guess if there's no way to nicely call anything from the PVR interface then your only option is to spin down the discs isn't it...?
 

madbrain

Active Member
Jan 5, 2019
Plenty of things you can do to save power on the CPU-side of things.

Enable speed-step
Enable c-states (including c6 states)
Disable hyperthreading
Disable vt-d
Disable any other option roms on the motherboard you aren't using
Run less DRAM

Should be able to shave 25W at least right there.

Then if you want to go farther, you can downclock the CPU and lower the max vcore as well.
There is no hyperthreading on an i5-6600k.
All the other settings were already set except VT-d. I just disabled it and rebooted. It made no difference. Still exactly 102W idle.

What about:
PCI Express Native Power Management
PCH DMI ASPM
ASPM
DMI Link ASPM Control
PEG - ASPM
Aggressive LPM support
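I suppose I can at least check from the Linux side whether ASPM actually ends up enabled once I toggle these - something like:
Code:
# per-link ASPM capability and negotiated state
sudo lspci -vv | grep -E 'ASPM (Disabled|L0s|L1)'

# kernel-wide ASPM policy (default / performance / powersave / powersupersave)
cat /sys/module/pcie_aspm/parameters/policy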
 

madbrain

Active Member
Jan 5, 2019
Likewise I don't know what's going on under the HSF, but the LSI HBAs seem to get pretty toasty regardless of whether there's any I/O or even any discs connected.
That's what I have heard. I haven't checked.

My back-of-a-fag-packet estimate usually rates HDDs at 15W spin-up, 10W active and 5W idle; rounding up to sixteen drives you'd use a maximum of 240W on the HDDs (almost all of it on the 12V rail) if you want to consider future expansion as well. Remember your peak power draw of the HDDs likely also includes everything else powering up too. Other than CPU you don't have that much in the way of items that can draw a ridiculous load, so it'd be interesting to see how other PSUs perform. $350/yr on leccy goes a long way when looking into new kit for efficiency's sake...
16 x 15 = 240W, but that's just for drives. If the CPU / motherboard / PCI-E cards / fans are taking, say, 75W idle and 150W peak, that's 390W combined peak in theory. Since the PSU isn't 100% efficient, one would want at least 25% more, so a 600W PSU seems more correct than 350W. And I'm not sure at what load PSUs are most efficient.

Well, it depends entirely on the sort of backups you're doing I guess; mine are mostly delta rsync, so usually more limited by CPU and IO than by the network speed. Only my weekly disc images of boot volumes or yearly full backup peg the LAN at maximum, but that's only for a couple of hours, so not worth the splurge for 10GbE (2.5 or 5Gbps would be interesting if there were some decent switches at less than a king's ransom). As the backup server is only on for ~3-4hrs a week, it doesn't really matter that much to me that it's not super-efficient.
I bought a Netgear switch for $160 - GS110MX (went back up to $199 lately). Only 2 x 10G copper, plus 8 x 1G. It works for my backup for now, though. Gets me 10GbE between my main workstation and NAS. Won't get me 10GbE to my HTPC, as I would need at least 1 more port.

Likewise surprised. Nowt from Asus on the spec sheet but from the pics it looks to be a Nuvoton chip which are generally well supported as far as sensors go.
Yeah, no idea here.

That's always the rub with these things. A quick search shows that JRiver has a scripting interface of sorts available, but yeesh VB brings me out in a rash. Hopefully there's something better than this available but you know the software better than I do...
Scripting Plugin - JRiverWiki

VB... :shudders: ...but some people must like it I guess.

I guess if there's no way to nicely call anything from the PVR interface then your only option is to spin down the discs isn't it...?
Well, the other way is to just record to a local SSD on the HTPC instead of the NAS. My current satellite DVR has a 2TB HDD and is never full. OTA uses higher bit rates, though, due to the older MPEG-2 codec and the compression rates being set in the ATSC standard. The cheapest 1TB SSD is $110 nowadays, and there is room for more than one in the HTPC case. This will probably save power by allowing the NAS to sleep many more hours, though probably not enough to pay for a 2TB SSD, but maybe a 1TB one.
 

madbrain

Active Member
Jan 5, 2019
I disabled the fans. Nice to have a case that can be opened easily without screws.
Power usage dropped from 102W to about 91W when all 5 case fans were powered off.
All 5 are 3-pin fans. 4 of them are connected to motherboard headers (cha1 - cha4 on the Z170-AR). The 5th is connected to a Molex as there are no more headers, and thus runs at full speed, but it is still quiet. They are all Cooler Master MegaFlow LED fans.

I believe the 2 top fans and the 1 side fan are this:
Cooler Master: MegaFlow 200 Blue LED Silent Fan

The front fan is the red version of the same:
Cooler Master: MegaFlow 200 Red LED Silent Fan
This one came with the case 6 years ago.

The rear fan is this, I believe:
Cooler Master: Blue LED silent fan 140mm

There are 2 more fans from the NH-D14 CPU cooler.
NH-D14
I did not try to unplug those. I'm sure the CPU could run fine at stock clock with just one of the 2 CPU fans. I probably wouldn't want to run the NAS without at least 2 of the 5 case fans.

There is also a tiny 40mm fan on one of the SATA docks (4 x 2.5) which I could probably turn off, but it wasn't convenient so I didn't try.
 

madbrain

Active Member
Jan 5, 2019
Instead of powering the fans off, I would swap them with 4pin pwm fans.
That's a consideration. I'm just not sure how useful that will be given that Linux isn't able to see the fan sensors, much less control them. The Asus BIOS does have a very elaborate Q-fan setup. I don't know if it really works under Linux or not. I can try and check. Under Windows, the Asus proprietary software allows turning off some fans completely.

3-pin fans can still be controlled in software, just by voltage instead of PWM. Except for the one fan that's on a Molex adapter.

Asus sells a fan extension card that will work on my motherboard and adds 3 fan headers.
FAN EXTENSION CARD | Motherboard Accessories | ASUS USA

But given that the motherboard built-in headers aren't seen or controllable under Linux, I don't know if this card would really help any.
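If the sensors ever do show up (via the nct6775 route mentioned earlier, for example), my understanding is the stock tools should be able to drive the 3-pin headers by voltage, roughly:
Code:
sudo apt install fancontrol
sudo pwmconfig                            # interactive: maps each pwm output to a fan, writes /etc/fancontrol
sudo systemctl enable --now fancontrol    # apply the generated curve at boot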