I have to chime in here about long-term use of spinning down drives...
In short, my primary reason for spin-down is heat. I've set up and run many drive pools since 2007, designed so that only 1 HDD spins up to serve a single file (watching a movie, streaming something) instead of the entire array. I think that is the biggest architectural decision to consider here.
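For reference, on Linux this kind of per-drive spin-down policy can be set with hdparm. A minimal sketch, assuming a Linux box with root access; the device names and the 60-minute timeout are my own illustrative choices, not from my actual setup:

```shell
# Set a standby (spin-down) timeout on each pool member individually.
# hdparm -S values 1-240 are units of 5 seconds; 241-251 are units of
# 30 minutes, so 242 means (242-240) * 30 min = 60 min idle before spin-down.
for dev in /dev/sda /dev/sdb /dev/sdc; do   # hypothetical device names
    hdparm -S 242 "$dev"
done

# Spin a drive down immediately, or check its current power state:
hdparm -y /dev/sda          # force standby now
hdparm -C /dev/sda          # reports "standby" or "active/idle"
```

Note this is volatile on some drives/controllers; to persist it across reboots you'd put the setting in /etc/hdparm.conf or a udev rule, depending on the distro.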
Note that now since I am upgrading everything to 10 Gbps at home, I am moving to ZFS and with that I'll keep all drives spinning in my next build (12x 10 TB drives, migrating 8 I previously had). This is because I don't want the wear and tear of all drives spinning up just to access a single file.
Heat can really build up in a closet or bedroom, far more than most people realize. I've had many builds, and as an earlier comment noted, a drive usually fills up and gets replaced with a much larger one once it ages out, rather than failing outright. Now with my new home, I've carefully handled the heating and cooling loads in the server-room-under-the-stairs.
I've actually been keeping a log... Note: all drives were from various vendors - some Green, some Blue, some Barracuda, some HGST, etc. - usually purchased in small batches of 3 to 4 at a time to avoid bad-batch failures.
17x 1 TB drives: 2 failures - funnily enough, both failures were drives not marked for spin-down
22x 4 TB drives: 1 failure, from a 7-year-old disk in my oldest batch
8x 10 TB drives: 0 failures (still pretty new)
As for SSDs, I've had 8 fail out of 15 since 2008 across laptops, desktops, and servers - and most were high-end consumer models. Now, two of those SSDs were my "cache" drives that absorbed writes before they hit the storage server, so I killed those on purpose with TBs/mo of writes. But the other 6 failures show me SSDs are just unreliable - I'd never trust my data on them.