napp-it Buyer's Guides


gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
The napp-it Buyer's Guide is quite outdated.

While most suggestions such as disks, L2Arc and Slog are probably the same or similar, others may differ due to different driver support, not only between FreeBSD and OmniOS/OpenIndiana but also on Oracle Solaris, the genuine, fastest and most feature-rich ZFS platform.

What I would like to discuss:
1. The general introduction for napp-it should be extended to "Top Hardware Components for napp-it on OmniOS/OI and Solaris NAS Servers", as these are the supported platforms for napp-it. FreeNAS is different, as it is not open to other operating systems or OS releases. Maybe the FreeNAS guide can be extended to "FreeNAS and other FreeBSD appliances".

2. The FreeNAS and napp-it Buyer's Guides should refer to the same pages for Boot Drives, Hard Drives, L2Arc and Slog.

Slog
For my own setups I have completely moved to Optane as Slog, for example:

Production use: Intel Optane 4800X (due to the guaranteed power-loss protection)
Performance-critical lab use: Intel Optane 900P (similar performance, but without guaranteed PLP)
Lab or home use: Intel 800P-64 (when it becomes available; from the specs, the number three choice)

For me, there is no room left for others. ZeusRAM is completely outdated, as are the other options like the DC 3700.

L2Arc
As an L2Arc should offer around 5x RAM and never exceed 10x RAM, you want a small NVMe (see the sizing sketch below). The smaller Optanes, like the 32G cache module, are also an option here. As Optane has no problem with concurrent reads/writes, even a single Optane, e.g. a 900P, used for ESXi, boot, Slog and L2Arc is an option.

Larger SSDs or NVMe drives are hardly an option.
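To make the 5x/10x rule of thumb concrete, here is a minimal Python sketch (my own illustration, not anything shipped with napp-it) that turns installed RAM into a suggested L2Arc size range:

Code:
# L2Arc sizing rule of thumb: around 5x RAM recommended, never more than
# 10x RAM, because the L2Arc headers themselves consume RAM/ARC.
def l2arc_range_gb(ram_gb):
    """Return (recommended, upper limit) L2Arc size in GB for a given RAM size."""
    return 5 * ram_gb, 10 * ram_gb

for ram in (8, 16, 32, 64):
    rec, cap = l2arc_range_gb(ram)
    print(f"{ram:>3} GB RAM -> ~{rec} GB recommended, {cap} GB upper limit")

With 32 GB RAM this gives roughly 160 GB recommended and 320 GB as an upper limit, which is why a small NVMe or one of the small Optane modules is enough.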

HBA
Here there may be some differences between the systems, as driver support is not identical. While LSI 2008-, 3008- or 9305-based systems are always best, support for others may differ. A common list would be good, optionally with remarks where one of the operating systems has no support.

Nics
Mostly Intel and Chelsio are best. Some are supported on FreeBSD, Illumos and Solaris; some lack support on a particular OS, e.g. the Intel X552 on Solaris.

Top system picks
These should work on any of the platforms (FreeBSD, Illumos, Linux, Solaris), or they should not be listed.

In general, a common hardware base would be easier to maintain and to keep up to date. As there are people who switch between the systems, such a "best for all" selection may be an essential criterion for many (even ZoL users, where hardware support is mostly better than on the Unixes, BSD or Solarish).

My preferred option would be to only list components with a "best for any current ZFS appliance" label, with maybe an optional remark (e.g. "not working on xx") if the component is superior.

Btw, some links and comments in the napp-it section refer to FreeNAS.
 

vinceflynow

New Member
May 3, 2017
29
5
3
Is it still required to create a VMFS disk from an Optane 900P or 4800X for SLOG in a napp-it all-in-one setup with virtualized OmniOS or OpenIndiana? Or can we pass the Optane through directly now?
 

gea

Well-Known Member
Dec 31, 2010
3,156
1,195
113
DE
I have not seen a newer ESXi or Illumos NVMe driver that works.
 

Rand__

Well-Known Member
Mar 6, 2014
6,633
1,767
113
There was a combination of a new Intel NVMe driver for ESXi and FreeNAS that was apparently working, but not stable (not in all cases)...
 

altano

Active Member
Sep 3, 2011
280
159
43
Los Angeles, CA
> Is it still required to create a VMFS disk from an Optane 900P or 4800X for SLOG in a napp-it all-in-one setup with virtualized OmniOS or OpenIndiana? Or can we pass the Optane through directly now?

I'm seeing a large performance penalty for this configuration, but the 900P is so insanely fast that I can't imagine the penalty being relevant if you're using the 900P as a slog/L2ARC. If you're curious about some numbers:

Optane 900P on bare metal Windows Server 2016:


Optane 900P-backed VMFS disk, served through virtualized OmniOS, as benchmarked from virtualized Windows Server 2016 (ESXi is the backing hypervisor):
 