Recommend an HBA with low power consumption that supports at least 16 SATA drives?


bleomycin

Member
Nov 22, 2014
I made the mistake (I think) of ordering an LSI 9300-16i. I should have done more research; the heatsink alone was a big enough giveaway. System idle power shoots up by 30 watts when this card is plugged in with no drives attached, and it doesn't seem to support ASPM according to powertop and lspci.
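
For anyone who wants to check the same thing, here is a rough, untested sketch of how the ASPM state can be pulled out of lspci output (assumes pciutils is installed; run it as root so lspci can dump the capability blocks, and note the device-name filter is just a guess at how the card shows up):

```python
# Sketch: show what lspci reports for ASPM on HBA-looking PCIe devices.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

for block in out.split("\n\n"):
    # crude filter for the HBA entry; adjust for your own card's name
    if not any(k in block for k in ("LSI", "Broadcom", "SAS")):
        continue
    lines = block.splitlines()
    print(lines[0])  # device header, e.g. "01:00.0 Serial Attached SCSI controller: ..."
    for line in lines[1:]:
        line = line.strip()
        # LnkCap = what the device advertises, LnkCtl = what the kernel actually enabled
        if line.startswith(("LnkCap:", "LnkCtl:")):
            print("   ", line)
```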

Does anyone know of anything more modern that will consume significantly less power for my use case of 16 SATA drives with ZFS or possibly Unraid? The annual cost of 30 watts at my utility rate is over $100/yr, and this machine lives in a closet where 30 additional watts makes a significant difference to temperatures.
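
For reference, the math behind that figure (the $/kWh rate below is just a placeholder; plug in your own):

```python
# Quick sanity check of "$100+/yr for 30 W".
watts = 30
rate_per_kwh = 0.40                       # assumed utility rate, USD/kWh
kwh_per_year = watts / 1000 * 24 * 365    # ~263 kWh/yr
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * rate_per_kwh:.0f}/yr")
# ~263 kWh/yr -> ~$105/yr at the assumed $0.40/kWh
```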
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
The only thing I can think of is getting four of those M.2 JMB585 5-port SATA cards and one of those x16 PCIe carrier boards for four M.2 cards. I think you need a PCIe slot that can bifurcate in this case.

This SATA chip is solid, AHCI compatible, and runs on basically any OS less than 20 years old. ASPM needs to be carefully weighed; some drives do not like it, and you will get many complaints from the kernel and ZFS.

You WILL need some slight airflow over the cards though, say a 120mm fan at 800 rpm. The additional power could be as little as 5 watts, maybe; I have no way to measure it. But small M.2 heatsinks should not be able to dissipate more than about 2 watts each.

The problem might be reaching break-even money-wise within three years.

Edit: The JMB585 is PCIe x2, so four cards would only need x8 of PCIe. Not sure such a carrier card exists, though.

Edit 2: The JMB585 also exists on full-height 5-port PCIe x2 cards. If you have two PCIe slots with at least x2, you get 10 ports; if your motherboard has 6 more onboard, you are done.
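
Quick tally of the two options (assuming the usual JMB585 layout of 5 SATA ports and PCIe 3.0 x2 per chip):

```python
# Port and lane budget for the JMB585-based options above.
PORTS_PER_CHIP = 5
LANES_PER_CHIP = 2

options = {
    "4x M.2 cards on a bifurcating x16 carrier": dict(cards=4, onboard=0),
    "2x full-height PCIe x2 cards + 6 onboard SATA": dict(cards=2, onboard=6),
}

for name, o in options.items():
    ports = o["cards"] * PORTS_PER_CHIP + o["onboard"]
    lanes = o["cards"] * LANES_PER_CHIP
    print(f"{name}: {ports} SATA ports, {lanes} PCIe lanes used")
# 4 cards -> 20 ports over 8 lanes; 2 cards + 6 onboard -> 16 ports over 4 lanes
```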
 

bleomycin

Member
Nov 22, 2014
Thanks, that's definitely an out-of-the-box solution! Unfortunately my motherboard only supports bifurcation on one slot, and it's already taken up by my NVMe storage. I noticed the 9400-16i is rated at 12 watts typical power here: https://docs.broadcom.com/doc/BC00-0459EN

The money is definitely a factor, but the heat is really the bigger one; every watt saved in this closet is noticed, especially come summertime. I'm just not sure whether I could still do significantly better.
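
Rough break-even math if I went the 9400-16i route (the card price and electricity rate below are placeholders, not real quotes):

```python
# Break-even sketch for swapping the 9300-16i for a 9400-16i.
card_price = 250.0        # assumed USD for a 9400-16i, not a real quote
rate_per_kwh = 0.40       # assumed utility rate, USD/kWh
watts_saved = 30 - 12     # measured 9300-16i overhead vs. 12 W "typical" for the 9400-16i

savings_per_year = watts_saved / 1000 * 24 * 365 * rate_per_kwh
print(f"~${savings_per_year:.0f}/yr saved -> break-even in "
      f"{card_price / savings_per_year:.1f} years")
# At these assumed numbers: ~$63/yr saved, break-even in roughly 4 years
```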
 

Stephan

Well-Known Member
Apr 21, 2017
Germany
Can you post a link to the card you ordered? Maybe it's a three-chip solution: two SAS controllers (10 W each) and a PCIe switch (another 8-10 W).
 

sko

Active Member
Jun 11, 2021
Bigger savings than ASPM come from using a drive-standby solution like hd-idle, btw, or from just dumping spinning rust and replacing everything with SSDs and/or NVMes.
I've shaved almost 50 W while increasing pool performance by orders of magnitude by replacing a mix of 8 HDDs and some SATA SSDs with 6x SAS SSDs and 4 new NVMes...

Ignoring the fact that putting a server in a closed space is a really bad idea per se, putting 16 spinning disks in such an environment is even worse. Those drives won't live long AND they will heat that closet up even more, so those few watts from the controller will be your smallest problem...
Those fans spinning at full speed very likely also use up a considerable amount of energy.

Putting spinning disks to sleep will wear them out in no time - you will spend more on replacing them than you ever save on energy. Additionally, ZFS has to be able to do its periodic writes and housekeeping, so it will wake them up anyway. Given that latencies easily jump into the range of seconds when those drives are spun down, they will also constantly drop out of the pool (or any soft- or hardware RAID, for that matter).

So if you really want/have to save on energy, go for SSDs and NVMes that have low power ratings and support low-power states (i.e. not Samsung...).
If you are tied to HDDs by storage space requirements, at least use fewer and bigger disks.
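
To put rough numbers on that (the per-drive idle wattage below is an assumed typical figure; check the datasheets of your actual drives):

```python
# Illustrative only: idle draw of many small drives vs. fewer big ones.
IDLE_W_3_5_HDD = 5.0   # assumed idle draw per 3.5" HDD, not a measurement

for drives in (16, 8, 4):
    print(f"{drives} drives idling: ~{drives * IDLE_W_3_5_HDD:.0f} W")
# 16 drives ~80 W, 8 drives ~40 W, 4 drives ~20 W at the assumed 5 W/drive
```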

As for the HBA: you could also try e.g. a 9[34]00-8i (= single chip) and a backplane with a port expander (IIRC they are usually in the range of ~5-10 W), but with SATA disks there *might* be some weird behaviour depending on how well their firmware deals with STP (never encountered problems here, but YMMV).
If that server is running 5 (or even 10+) year old hardware, the biggest savings in power and heat can be gained by replacing it with something like a more recent Atom or Xeon-D, depending on your compute requirements...
 

i386

Well-Known Member
Mar 18, 2016
Germany
why do people say "nvme"? :D

It's a protocol used by PCIe-connected SSDs that come in different form factors (M.2, PCIe add-in card, 2.5" U.2/U.3, EDSFF, and others).
 

mattventura

Active Member
Nov 9, 2022
The 9305-16i is less of a power/heat hog than the 9300-16i, but isn't much cheaper than the 9400-16i. I've had issues with direct-attach backplane LEDs on the 9400, though I'm not sure whether those issues would also be present on the 9305 or 9300. You didn't mention whether you have a dumb backplane, a managed backplane, or no backplane at all, so that may or may not matter to you.

There are also the options of using an expander backplane, as mentioned a couple of posts up, or a standalone SAS expander, but I'd question how much power you'd actually save. Expander backplanes are nice because they avoid the cabling mess you'd typically expect from direct-attach drives.