Note this howto is about 5 years old now. If I were building an illumos-based storage server today, I'd be more likely to use one of these distributions:
OmniOS - Great lightweight general purpose UNIX OS. Installs to disk.
SmartOS - Next generation hypervisor OS, boots from ephemeral source (PXE, USB stick, CD, etc) and sets all storage aside into a zfs pool for guest zones.
I've been running SmartOS on an N40L for years with a SmartOS guest zone running Netatalk and sharing files out to my Macs. I'm in the process now of re-architecting this storage onto OmniOS for the benefit of having a more general purpose global zone and super easy IPv6 implementation.
Until this year, Oracle Solaris was the most feature-rich and, in my tests, the fastest ZFS server. It comes with native ZFS that supports encryption, fast sequential resilvering, vdev remove for all vdev types incl. Raid-Z (unlike Open-ZFS), dedup2, SMB 3.1 and NFS 4.1, among other features.
This year, 2019, was the turning point feature-wise. Open-ZFS now includes encryption, fast sequential (sorted) resilvering, special vdevs, vdev remove (basic and mirror vdevs only), pool checkpoints, trim, forced ashift and many other features. They have all landed in Illumos, the free Solaris fork, and therefore in OpenIndiana (a rolling Illumos distribution with a huge repository that includes desktop apps) and in OmniOS, a storage-oriented distribution that freezes Illumos into stable and long-term-stable releases, offers commercial support, and ships security and bugfix updates often bi-weekly - perfect for a robust and stable production storage server.
The current version of OmniOS is 151032, with all the new Open-ZFS features, SMB 3.02, and many updates related to hardware support and Comstar iSCSI.
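The new pool-level features mentioned above are all driven by plain zpool commands. A few illustrative examples (pool and device names are placeholders, adapt them to your system):

```shell
# create a pool with a forced sector alignment (ashift 12 = 4k sectors)
zpool create -o ashift=12 tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0

# add a mirrored special vdev for metadata and small blocks
zpool add tank special mirror c1t4d0 c1t5d0

# take a pool checkpoint before a risky change;
# roll back later via a rewind import if needed
zpool checkpoint tank
zpool import --rewind-to-checkpoint tank

# trim free space on SSD-backed pools
zpool trim tank

# remove a top-level vdev (basic and mirror vdevs only in Open-ZFS)
zpool remove tank mirror-1
```

These are administrative commands that act on real pools and disks, so run them only on a test system first.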
The above enterprise-ready Unix operating systems are managed via CLI, or you can install a desktop environment (Solaris and OpenIndiana). As an add-on, you can use my napp-it web-UI tool to manage all ZFS-related items (pools, (special) vdevs, zvols, filesystems, snaps, clones, trim, raid management, share management, enclosure management incl. a disk map, SMART, remote replication, ACL management, realtime monitoring, acceleration with background agents, ZFS clustering etc.). You can update/bugfix the OS independently from napp-it updates. There is no lock-in between them (or a special hardware environment).
The new version not only checks disk health via SMART self-tests (ok/failed) but also analyzes several SMART attribute values that may predict a future failure. Above a certain value, a SMART warning is displayed in the disk/SMART overview.
Are the disks then bad? No.
But in a critical production system I would replace them. At home or in a lab I would probably ignore the warning and wait until a disk really fails.
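The idea behind such a predictive check can be sketched in a few lines: parse the attribute table from `smartctl -A` output and warn when certain raw values exceed a limit. The attribute names and thresholds below are illustrative assumptions (commonly cited failure predictors), not napp-it's actual rules:

```python
# SMART attributes whose nonzero raw value often precedes a disk failure.
# These names/limits are illustrative assumptions, not napp-it's rule set.
PREDICTIVE_ATTRS = {
    "Reallocated_Sector_Ct": 1,      # any reallocated sectors
    "Current_Pending_Sector": 1,     # sectors waiting to be remapped
    "Offline_Uncorrectable": 1,
    "Reported_Uncorrect": 1,
}

def parse_smart_table(text):
    """Parse the attribute table of `smartctl -A` output into
    {attribute_name: raw_value}."""
    attrs = {}
    for line in text.splitlines():
        parts = line.split()
        # attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[0].isdigit():
            try:
                attrs[parts[1]] = int(parts[9])
            except ValueError:
                pass  # some raw values are composite strings; skip them
    return attrs

def smart_warnings(attrs):
    """Return the predictive attributes at or above their warning limit."""
    return {name: val for name, limit in PREDICTIVE_ATTRS.items()
            if (val := attrs.get(name, 0)) >= limit}

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       24
  9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       41220
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       0
"""

print(smart_warnings(parse_smart_table(sample)))  # -> {'Reallocated_Sector_Ct': 24}
```

In this sample the disk still passes its self-tests, but 24 reallocated sectors would trigger a warning - exactly the "not failed yet, but watch it" case described above.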
Background SMART checks can be disabled under Services > ACC.