Hi,
I am currently trying to optimise my home server for lower power consumption, as it produces a lot of heat.
So I enabled ASPM and the other energy-saving options in the BIOS, and switched to the "powersave" CPU frequency governor using intel_pstate.
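For reference, the governor switch itself does not need the BIOS; with intel_pstate in its default active mode only "performance" and "powersave" are available, and it can be set from the shell. A sketch, assuming the usual sysfs cpufreq paths of a mainline kernel:
Code:
# check which driver and governor are currently active
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# switch every core to powersave
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor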
According to powertop, my processor now spends most of its time (>95%) in the C6 state. I think this is good: it saved me some 50 to 60 watts, which is quite substantial.
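The residency numbers powertop reports can also be read directly from sysfs, which is handy for a quick check over ssh. A sketch; the set of states (and whether C6 shows up under that name) depends on the CPU and the cpuidle driver:
Code:
# cumulative time (microseconds) spent in each idle state since boot, core 0
for s in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
  printf '%-8s %12s us\n' "$(cat "$s/name")" "$(cat "$s/time")"
done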
I am wondering if I can save more power on the network cards and the SAS cards. Not because I expect my electricity bill to drop much further, but because I hope to lower the temperature of these cards a bit. Electronics like it cool, so running the components cooler should improve their endurance a little.
I did a bit of research and found the following about ASPM in my server (the one-liner prints each lspci record that mentions ASPM, highlighting the device line and its ASPM capability/status):
Code:
root@pve0:~# lspci -vv | awk '/ASPM/{print $0}' RS= | grep --color -P '(^[a-z0-9:.]+|ASPM )'
00:1c.0 PCI bridge: Intel Corporation C620 Series Chipset Family PCI Express Root Port #1 (rev fa) (prog-if 00 [Normal decode])
LnkCap: Port #1, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
00:1c.1 PCI bridge: Intel Corporation C620 Series Chipset Family PCI Express Root Port #2 (rev fa) (prog-if 00 [Normal decode])
LnkCap: Port #2, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
00:1c.2 PCI bridge: Intel Corporation C620 Series Chipset Family PCI Express Root Port #3 (rev fa) (prog-if 00 [Normal decode])
LnkCap: Port #3, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk-
00:1c.3 PCI bridge: Intel Corporation C620 Series Chipset Family PCI Express Root Port #4 (rev fa) (prog-if 00 [Normal decode])
LnkCap: Port #4, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk-
00:1c.5 PCI bridge: Intel Corporation C620 Series Chipset Family PCI Express Root Port #6 (rev fa) (prog-if 00 [Normal decode])
LnkCap: Port #6, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
01:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
02:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <2us, L1 <16us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
05:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 06) (prog-if 00 [Normal decode])
LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <32us
LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
50:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
LnkCap: Port #5, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
51:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
89:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
LnkCap: Port #13, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
89:04.0 PCI bridge: Intel Corporation Device 347c (rev 04) (prog-if 00 [Normal decode])
LnkCap: Port #15, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
8a:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
8b:00.0 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)
LnkCap: Port #2, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
8b:00.1 Ethernet controller: Intel Corporation Ethernet 10G 2P X520 Adapter (rev 01)
LnkCap: Port #2, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
So it seems my two SAS3008 cards do not support ASPM. I am not surprised by this, and I have now bought a new 9400-16i card. It probably does not support ASPM either, but its datasheet at least shows much lower power dissipation than my two SAS3008 cards, so I already expect a bit of improvement from that.

Further, I can see that my two Intel X520 NICs do support ASPM (L0s only, according to lspci), yet lspci claims "ASPM Disabled". Why is this? Can I enable it somehow, and is it worth it, or will it lead to the network connection being interrupted all the time?
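From what I have read, the kernel keeps a global ASPM policy that can be inspected and changed at runtime via sysfs, and pcie_aspm=force on the kernel command line can override firmware restrictions. This is a sketch from the mainline kernel documentation, not something I have verified on this exact hardware; forcing ASPM on links that cannot really do it may hang them:
Code:
# show the kernel-wide ASPM policy; the active entry is in brackets
cat /sys/module/pcie_aspm/parameters/policy

# ask the kernel to enable ASPM wherever it considers it safe
echo powersave > /sys/module/pcie_aspm/parameters/policy

# firmware restrictions can be overridden at boot (risky):
#   add pcie_aspm=force to the kernel command line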
So I have my two NICs and two SAS cards that probably cannot (or will not) do ASPM. If we look at the list above, these devices remain, all of which support ASPM but have it disabled:
Code:
50:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
LnkCap: Port #5, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
89:02.0 PCI bridge: Intel Corporation Device 347a (rev 04) (prog-if 00 [Normal decode])
LnkCap: Port #13, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
89:04.0 PCI bridge: Intel Corporation Device 347c (rev 04) (prog-if 00 [Normal decode])
LnkCap: Port #15, Speed 16GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
However, I don't know what these devices are; probably something from the chipset or CPU. Judging by the bus numbers, they could be the root ports that my SAS cards and the X520 sit behind. Would it be worth enabling ASPM on these as well, and is that something I can find in the BIOS, or can it only be enabled manually from the command line?

At least I can say that enabling ASPM and the powersave frequency governor not only saved a lot of electricity, it also makes the little closet where I run this server much, much cooler. I can now probably even install quieter fans. I check the system temperatures using ipmitool, and only one reading worries me, the "CPU_VRMIN Temp"; I wonder if I can get that one a bit cooler as well, probably by adding another fan somewhere.
Code:
root@pve0:~# ipmitool sdr
CPU Temp | 35 degrees C | ok
PCH Temp | 36 degrees C | ok
System Temp | 31 degrees C | ok
Peripheral Temp | 34 degrees C | ok
CPU_VRMIN Temp | 52 degrees C | ok
VRMABCD Temp | 40 degrees C | ok
VRMEFGH Temp | 38 degrees C | ok
Inlet Temp | no reading | ns
DIMMA~D Temp | 32 degrees C | ok
DIMME~H Temp | 33 degrees C | ok
....
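Side note: ipmitool can also query a single sensor by name, including its thresholds, which shows how far the VRM actually is from its critical limit (sensor name copied from the sdr output above):
Code:
ipmitool sensor get "CPU_VRMIN Temp"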
For context: this is a Supermicro X12SPL board in a Supermicro 743AC-668B chassis, where I also installed a rear exhaust fan. I previously used a Papst fan there, which was very good (lots of throughput) but noisy, so I switched to a Noctua that has less throughput but is almost inaudible.

The goal of this exercise is for everything to get a bit cooler still, so I can lower the fan speeds further and make this thing even quieter.
Apart from the CPU_VRMIN, my two hard disks are now the hottest parts in the system, at 37 degrees C. I also have a couple of SSDs, which are basically cold (27 degrees or so). (By the way, would it be worth enabling IDLE_A, IDLE_B and IDLE_C on the SSDs, so that they can go into idle power states as well?)
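If anyone wonders about the names: IDLE_A/IDLE_B/IDLE_C are fields of the SCSI Power Condition mode page, and sdparm can show and set them. A sketch under the assumption that the drive actually implements that page; /dev/sdX is a placeholder, and SATA drives behind a SAS HBA may not honour it:
Code:
# show the Power Condition mode page ("po"); the fields appear as IDLE_A, IDLE_B, IDLE_C
sdparm --page=po /dev/sdX

# enable the shallowest idle state; --save makes it persist across power cycles
sdparm --set=IDLE_A=1 --save /dev/sdX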