Power management for Ubuntu 18.04 Linux NAS?


madbrain

Active Member
Jan 5, 2019
212
44
28
I set up a NAS with Ubuntu 18.04 and ZFS, with 5x10TB drives in RAIDZ2. It's working great. The only concern is the power usage: it draws about 100W idle, which works out to 876 kWh per year. At PG&E's rates, that's quite expensive - between $200 and $300 per year.

I have WoL working already. I can initiate it from my smartphone to wake up the NAS, or any other computer in the house. Acronis pre-backup scripts can turn on the NAS as needed as well.

What I haven't figured out is how to get the NAS to shut down automatically during periods of inactivity.
I previously used Windows 7 and 10 as a file server, and with the "power saver" profile, the server would automatically go to sleep if there was no ongoing network activity, i.e. no writes to a shared drive. That was pretty easy.

How can I make the same thing happen on my Ubuntu NAS?
 

Marsh

Moderator
May 12, 2013
2,645
1,496
113
What is the purpose of your NAS? Is it for media files?
What are your system specs? 100W idle is very high. Do you use the machine to host VMs as well?
Do you want to spin down the drives to save power?
 

madbrain

Active Member
Jan 5, 2019
212
44
28
What is the purpose of your NAS? Is it for media files?
What are your system specs? 100W idle is very high. Do you use the machine to host VMs as well?
Do you want to spin down the drives to save power?
Mainly as an SMB file server to back up the other computers in the home, which have lots of storage. Possibly also as a media server, with clients that support WoL like JRiver MC, or manual wake-up through a WoL app. The wake-up part is already working - it's the "go back to sleep automatically when not in use" part that is not.

I don't use VMs. Spinning down the five 10TB SATA WD100EMAZ drives would only bring idle power down by 15W. I have already experimented with spinning them down and measured that with a Kill A Watt.

This is a fairly powerful computer, with all 7 slots on the motherboard filled, which can handle a large number of SATA or SAS drives, and I don't think there is any way to reduce the power usage significantly without making it a much less powerful one. Even if I could bring the total idle wattage down to, say, 50W, I would still want the power-saving feature to put it to sleep when not in use. I would be comfortable running it 24/7 if it were 10-15W total; the case fans alone take about that much according to my Kill A Watt.
I designed the computer to be very powerful on purpose. I never intended it to run 24/7. I assumed there was a way to do power management under Linux, as I have been doing it very simply under Windows for years. Is this not the case? The main reason I switched the box to Linux was the lack of ZFS on Windows 10, and not wanting to pay big $$$ for Windows Server and whatever software storage solution MS has these days for a home NAS. I see that there is actually a ZFS port to Windows, but I don't know whether it's reliable enough to run yet.

I will post the full hardware specs below, but please understand that I'm not looking to make any hardware changes here, only software changes. Responses telling me to remove sticks of RAM, shrink the PSU, or remove PCIe controllers won't be helpful. There is a very good reason for every single piece of hardware in this machine, and it needs to stay. I don't want the thread to get derailed onto hardware issues.

With that said, the full hardware specs are as follows:

Cooler Master HAF-XM case with 5 case fans: 3 x 200mm fans, 1 x 230mm fan, 1 x 140mm fan. All are 3-pin, except one of the 200mm fans, which was just replaced with a Noctua 4-pin model.
Raidmax RX-1200AE PSU
Asus Z170-AR motherboard
i5-6600k CPU
Noctua NH-D14 cooler (dual-fan cooler, both fans 3-pin)
2x16GB DDR4-3000 RAM (running at 2400 speed)
1 x Aquantia AQN-107 10 Gbps ethernet NIC, PCIe x4
1 x LSI 9207-8i SAS controller, PCIe 3.0 x8
1 x LSI 9207-4i4e SAS controller, PCIe 3.0 x8
3 x Silicon Image 3132 eSATA controllers, PCIe 1.0 x1 (for 3 x SANS external 4-port eSATA port-multiplier enclosures; port multipliers are not supported by the LSI or Intel controllers)
1 x Adaptec 29320 PCI SCSI controller (for an old external HP DDS-4 tape drive)
5 x WD100EMAZ 10TB SATA drives
1 x Kingston 96GB SSD (OS boot drive)
Drive docks with plenty of remaining drive bays in the main case for backup drives: 3 x 3.5" hot-swap in the front, 5 x 2.5" hot-swap in the front, plus space for one more 3.5" drive internally. All bays are hooked up to either the LSI SAS or the Intel SATA controller.
USB 3.0 4-port front hub with card reader, in a 5.25" bay. Could be removed to make room for 2 more SATA drives (1 x 3.5", 1 x 2.5") if needed.

As I said, it's a fairly beefy machine that I don't want to run 24/7, but only when needed.
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
228
118
43
::1
I would guess it's all your add-in cards that are taking the watts - this is easily 30W:

1 x Aquantia AQN-107 10 Gbps ethernet NIC, PCIe x4 (6W at 10 Gbps, 4W at 5 Gbps over a full-length 100 m Cat6a run)
1 x LSI 9207-8i SAS controller, PCIe 3.0 x8 (power consumption, typical: 9.8W)
1 x LSI 9207-4i4e SAS controller, PCIe 3.0 x8 (power consumption, typical: 9.8W)
1 x Adaptec 29320 (can't find the wattage, but my guess is around 10W)
3 x Silicon Image 3132 eSATA (can't find the wattage, but my guess is around 10-15W total)

You can't change this with software.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I would guess it's all your add-in cards that are taking the watts - this is easily 30W
You are probably right about this part.

You can't change this with software
I am not sure why you would say that. When the machine is in sleep mode, it consumes < 3W.

All I'm asking is how to make suspend happen automatically when the NAS isn't in use, rather than manually by pressing the power button, which I mapped to "suspend". I haven't set up remote access on the NAS yet, so I can't suspend it remotely. But I want suspend to happen automatically, not manually, as there may be several users of the NAS, and a manual suspend would be bad if there is another user. Manual wake-up with WoL is OK.
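For the curious, the power-button mapping is just systemd-logind configuration - a minimal sketch, assuming nothing like a desktop session is intercepting the key:

Code:
# /etc/systemd/logind.conf
[Login]
HandlePowerKey=suspend
# then: sudo systemctl restart systemd-logind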
 

madbrain

Active Member
Jan 5, 2019
212
44
28
Thanks! I do have the GUI and just set it up. Looks like the shortest interval is 15 minutes. Now I just need to figure out how smart the "idle" logic is.
Well, it's not smart at all. I was streaming a video file from the NAS, and the machine went to sleep. On Windows, this would have kept the server from sleeping.

So, how do I get power management to work as intended and not suspend while files are being read from or written to by network clients?
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Yeah, in my experience the various power management utils available rarely go into the right amount of (or indeed any) detail about what the server is doing, so I always end up scripting it. My backup server, for example, has a script that runs every hour and shuts it down only when all conditions are met.

A similar thing could be done for "suspend after N iterations as long as X, Y or Z isn't happening": let's say you run the script every 10 minutes, and every time it finds X, Y or Z aren't happening it adds +1 to a value in a file; when that value reaches, say, 6 (thus representing an interval of 1 hour), issue the suspend command (exactly which command also depends on a number of factors, e.g. systemd vs. other suspend mechanisms).

Rather depends how comfortable you are with scripting, but the underlying logic is quite simple.
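Something like this skeleton, say - untested, and it assumes systemd's systemctl suspend plus a cron entry every 10 minutes; the is_busy function is a placeholder for your actual X/Y/Z checks:

Code:
#!/bin/bash
# Idle-counter skeleton: run from cron every 10 minutes.
# Suspends after LIMIT consecutive idle checks (6 x 10 min = 1 hr).
COUNTER_FILE=/var/run/idle-counter
LIMIT=6

is_busy() {
    # Placeholder - replace with real checks (SMB locks, users, etc.)
    return 1
}

if is_busy; then
    echo 0 > "$COUNTER_FILE"    # activity seen, reset the counter
    exit 0
fi

COUNT=$(( $(cat "$COUNTER_FILE" 2>/dev/null || echo 0) + 1 ))
echo "$COUNT" > "$COUNTER_FILE"

if [ "$COUNT" -ge "$LIMIT" ]; then
    echo 0 > "$COUNTER_FILE"    # reset so we don't re-suspend on wake
    systemctl suspend
fi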
 

madbrain

Active Member
Jan 5, 2019
212
44
28
I'm comfortable with scripting, but I'm not sure where I would gather the right information about SMB daemon activity - in terms of, say, whether any clients have open files, and the time of the most recent I/O by a client (read or write).
I'm surprised this isn't a built-in feature of the OS, or that there isn't a package that can do this simple thing. Not going to sleep during SMB activity has to be a fairly common use case.
 

nerdalertdk

Fleet Admiral
Mar 9, 2017
228
118
43
::1
You could make a cron script that checks SMB activity.

smbstatus -L (lists locked files on SMB shares)
smbstatus

Then, if you don't see a locked file within the last 30 minutes, go to sleep?
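Rough sketch of the check - I'm assuming smbstatus prints "No locked files" when the table is empty, so verify that on your box first:

Code:
#!/bin/bash
# Cron sketch: suspend when Samba reports no locked files.
# Assumes smbstatus prints "No locked files" when idle - verify first.
if smbstatus -L 2>/dev/null | grep -q "No locked files"; then
    systemctl suspend
fi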
 

PigLover

Moderator
Jan 26, 2011
3,186
1,545
113
Install "hd-idle" & configure it for a good "sleep" timer for your drives (5-10 min?). Run a script that checks drive status and once they have all gone idle put the baby to sleep too.
 

madbrain

Active Member
Jan 5, 2019
212
44
28
You could make a cron script that checks SMB activity.

smbstatus -L (lists locked files on SMB shares)
smbstatus

Then, if you don't see a locked file within the last 30 minutes, go to sleep?
Unfortunately, this command only shows the time the file was first locked, not the time of the last I/O. For example, if I stream a video file, the time shown for that file never updates. So this method won't work.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
That's why I think you'd need to move the timer logic into the script itself and not rely on the times reported by smbstatus; assuming client X is playing a file, when it reaches the end of the file the lock should close accordingly. The script sees there aren't any locks (or there are fewer than Y locks), increments the counter, the counter eventually reaches Z, run the suspend command.
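E.g. the busy check from my earlier skeleton could just count rows in the lock table - a sketch that assumes rows in the smbstatus -L output start with a numeric Pid (check your version's output format):

Code:
is_busy() {
    # Any row in the locked-files table counts as activity; rows
    # starting with a numeric Pid is an assumption - verify the format.
    smbstatus -L 2>/dev/null | grep -qE '^[0-9]+'
}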
 

madbrain

Active Member
Jan 5, 2019
212
44
28
assuming client X is playing a file, when it reaches the end of the file the lock should close accordingly.
That may or may not be the case. For example, when mounting an ISO, the lock will stay until one unmounts the file. Maybe that's the desired behavior, but I think relying on that lock will probably just cause the server to stay up.
I just tested this: I put the client machine to sleep, but the lock on the server remained.
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Just so I understand your logic: you mean that when a client is hibernated or powered off, the lock is still present on the server? How long do you have to wait before it disappears?

Back in the 2003 era there used to be a parameter to close locks early and also kill any other locks from the same IP, but I don't think it's around any more. I think to do it these days you'd need to tweak the TCP socket timeout or the deadtime option to limit the amount of time smbd will hold on to a phantom lock, but it's not something I've experimented with much.

Edit: looking at the man page, the deadtime default is 0 (i.e. non-expiring), but it only takes effect if there are no open files, so I think if you want the locks to close quickly you'd need to tweak the socket options first.
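That part is just one line in the [global] section of smb.conf - a sketch, noting that deadtime is in minutes and only bites once a connection has no open files:

Code:
# smb.conf, [global] section
# Drop connections idle for 10 minutes - only effective once the
# client holds no open files, hence the socket tweaks discussed below.
deadtime = 10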
 

madbrain

Active Member
Jan 5, 2019
212
44
28
Just so I understand your logic: you mean that when a client is hibernated or powered off, the lock is still present on the server? How long do you have to wait before it disappears?
Yes, that's what I mean. And I don't know how long it takes for the lock to disappear; I haven't played with it enough yet.

Back in the 2003 era there used to be a parameter to close locks early and also kill any other locks from the same IP, but I don't think it's around any more. I think to do it these days you'd need to tweak the TCP socket timeout or the deadtime option to limit the amount of time smbd will hold on to a phantom lock, but it's not something I've experimented with much.

Edit: looking at the man page, the deadtime default is 0 (i.e. non-expiring), but it only takes effect if there are no open files, so I think if you want the locks to close quickly you'd need to tweak the socket options first.
Thanks. Can you be more specific about what those TCP socket options are and how to change them?
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
I've not really done any fiddling with them for the best part of a decade - there's a brief listing of the parameters in the smb.conf man page. I think you'd probably want to fiddle with TCP_KEEPIDLE, but from a quick search I found this thread on ServerFault of someone doing what sounds like a similar thing. Documentation for these various options is all over the place, and I haven't found a single authoritative source for all of them. For instance, here's the Linux doc on the TCP_KEEP* params:
Code:
    TCP_KEEPCNT (since Linux 2.4) The maximum number of keepalive probes TCP should send before dropping the connection. This option should not be used in code intended to be portable.

    TCP_KEEPIDLE (since Linux 2.4) The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes, if the socket option SO_KEEPALIVE has been set on this socket. This option should not be used in code intended to be portable.

    TCP_KEEPINTVL (since Linux 2.4) The time (in seconds) between individual keepalive probes. This option should not be used in code intended to be portable.
In particular, someone came up with the following example:
Code:
socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPIDLE=30 TCP_KEEPCNT=3 TCP_KEEPINTVL=3
...which will apparently close a lock (or rather, the TCP session) roughly ~40s after last access; I think that's rather low-balling it myself, but I'd recommend trying each option one at a time and seeing where that gets you. Changing these is mucking with the underlying TCP stack, so exercise caution and expect weird bugs to reveal themselves.
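If you do experiment, validate and reload before trusting it - standard Samba tooling:

Code:
testparm -s                      # syntax-check smb.conf
sudo systemctl restart smbd      # pick up the new socket options
watch -n 5 smbstatus -L          # watch the lock table as a client idles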
 