Minimal hardware for napp-it all-in-one -- final system configuration


leecallen

New Member
Jan 20, 2014
My "cheap" ($850) mini-ITX Napp-it All-in-One system is now up and running. Since I asked for (and received) advice here in building the system I thought I would share the details of the build, and my candid thoughts on the results.

Thanks to gea for his guidance and support. TBH some of the decisions I made were against his advice (mini-ITX form factor, non-ECC memory).

Purpose: NAS and ESXi virtualization host for my home office / lab. I do development and testing and I need VMs for various operating systems and vintages - going back to Windows XP and Red Hat Linux 8.0. I keep *everything* on my NAS and it's important I never lose anything, even (especially) if I mistakenly delete or overwrite something.

Goals: Our family moved into a different home and my home office is now much smaller. So I wanted to consolidate 3 servers (VM host, NAS, computationally intensive processing) into 1. I wanted this system to be inexpensive, compact, quiet, and relatively high performance - mostly by combining the VM host and storage into a single box, so the VM datastores are local instead of accessed over a LAN.

I made some mistakes along the way and learned some difficult lessons, all reflected here.

Choices I made:
  • Mini-ITX. I work from home in a small office. I am consolidating 3 servers into one and I just want it to be small. The main trade-offs I encountered with mITX (as gea warned me):
    • reduced selection of motherboards (esp if one needs ECC memory)
    • only two DIMM slots
    • only one PCIe slot - this exacerbated a problem, see "Mistakes" below.
  • Non-ECC memory. Selecting a motherboard that supports ECC (if they even exist in the mITX form factor?) would have increased the cost of this system significantly. I don't think it's justified for my purposes.
  • Because those two DIMM slots limit my ability to increase memory in the future, I installed 2x 32GB DIMMs for 64GB of memory. I often need to have Windows Server and SQL Server and other VMs running at the same time so this will be plenty of memory.
  • My disk space needs are well under 1TB, so I configured 3x 1TB SSD drives in a RAIDZ1 array for 1TB usable space.
  • I know about the limitations of RAIDZ1 with large disks. I assume those issues are less of a concern with SSD (because it would re-silver much more quickly). And I am very good with...
  • Backup: I take periodic snapshots, and backups to an on-premises NAS (Synology single disk) and to rsync.net.
  • Boot from an M.2 NVMe 1TB which contains ESXi and OmniOS, and pass the LSI SAS controller through to OmniOS. I did not (yet) mirror the NVMe disk.
  • I chose an inexpensive CPU, the Intel Core i5-12600K with 10 cores / 16 threads at 3.7GHz. I need those 10 cores to support my VMs and I frequently run some analytics that use all available cores.
  • No SLOG. After doing some reading I am not sure whether I need it, and how I might implement it. This system is UPS protected.
  • I could have used the motherboard onboard SATA controller and passed the individual disks through to OmniOS, but I installed a LSI SAS controller and I pass that through instead.
  • ESXi accesses the OmniOS ZFS datastore via NFS (rough command sketch below).
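
For anyone curious what the storage side looks like in practice, the OmniOS/ZFS part boils down to a handful of commands. This is only a rough sketch - the pool name, dataset name, and device IDs below are placeholders, not my exact configuration:

  # create the pool from the three SSDs on the passed-through LSI controller
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0
  # create a dataset for the VM datastore and export it over NFS
  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore
  # then in the ESXi host client: Storage > Datastores > New datastore > Mount NFS datastore,
  # pointing at the OmniOS VM's IP and /tank/vmstore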

Components and prices: Most items were purchased from NewEgg on Aug 11, 2023. The price would have been lower if I had shopped around for the components.
  • Chassis: Thermaltake V1 mITX chassis. $40
  • Motherboard: ASRock H670M-ITX/AX. LGA-1700, 2x DDR4-5000, 1x PCIe_5.0x16, 2x M.2, 4x SATA3, Intel Gb LAN. $170
  • CPU: Intel Core i5-12600K. Alder Lake, LGA-1700, 125W, 10-core. $179
  • Memory: Kingston 64GB DDR4-3600 (2x 32GB). $120
  • Boot drive: WD Black SN850x M.2 1TB, PCIe 4.0 x4. $60
  • Storage: Crucial MX500 1TB SATA3. Qty 3 x $48 = $144.
  • Power supply: Corsair RM750e, a standard ATX PSU, 750w. $100
  • CPU cooler: Noctua NF-A9. $53
  • LSI SAS controller purchased used on eBay for $22 (Inspur LSI YZCA-00424-101).
  • Mini SAS (SFF-8643) to 4x SATA Cable $16
  • Misc: NewEgg gave me some kind of bundle/promo discount for -$72
Total cost: $842

Mistakes and lessons learned:
  • My biggest mistake: I initially purchased an Intel i5-12600KF CPU - note the final F, which indicates it does not provide graphics, so the motherboard HDMI doesn't work - a graphics adapter is required. I did not know that! Adding a graphics adapter is only a $50 problem but it uses the only PCIe slot and I need it for the SAS controller. So I changed out the CPU for a i5-12600K (no F).
  • Diagnosing the above problem took a lot of time and some expense: first, this was my first PC build in many years. And the only symptom of the problem was a black screen, so the cause could be almost anything. In the process of figuring things out I purchased some extra hardware I did not end up using. I also played with an alternative configuration utilizing the 'F' CPU and a graphics adapter, and the motherboard's onboard SATA controller. That worked, but I prefer the LSI SAS approach. So I replaced the CPU.
  • The 2nd NVMe M.2 slot doesn't seem to work. I have seen other reports of this problem in the NewEgg reviews of this motherboard. Now that my system is fully up and running I may take another run at resolving that. But I probably won't - I do not want to disassemble this (now production) server again.
  • I thought I could just copy my VMware Workstation files ( *.vmx, *.vmdk ) over and run them. But I had to "Export to OVA" and then "Import" them, which required another non-trivial diagnostic effort and a more time-consuming migration process (see the sketch after this list).
  • ESXi 7.0 installation requires this parameter: cpuUniformityHardCheckPanic=FALSE. I later saved that setting via System Settings - Kernel - "set -s cpuUniformityHardCheckPanic -v FALSE". Thanks to Google I figured this one out pretty quickly.
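
For reference, here is roughly how that setting gets applied, first at install time and then persistently. This is the generic procedure for Alder Lake CPUs under ESXi 7.x, not a transcript of my exact session:

  # at the ESXi installer boot prompt, press Shift+O and append:
  cpuUniformityHardCheckPanic=FALSE
  # after installation, make it persistent from an ESXi shell:
  esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
  # verify:
  esxcli system settings kernel list -o cpuUniformityHardCheckPanic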
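
On the Workstation-to-ESXi migration: the export/import can also be scripted with VMware's OVF Tool instead of clicking through the GUI, which helps when there are a lot of VMs. A rough sketch with example paths (not my actual VM names):

  # convert a Workstation VM to an OVA (run wherever ovftool is installed)
  ovftool "C:\VMs\winxp-test\winxp-test.vmx" winxp-test.ova
  # the OVA can then be imported via the ESXi host client (Create/Register VM > Deploy from OVF/OVA),
  # or deployed straight to the host, which will prompt for the root password:
  ovftool winxp-test.ova vi://root@esxi-host/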

The system is up and running, performing automatic snapshots, and all of my VMs are working. I am very happy with it.

I still need to set up automatic backups to my Synology NAS and rsync.net, which I will do in the next several days.
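
For the rsync.net leg, ZFS snapshot replication over ssh is one way to do it (the Synology leg will more likely be plain rsync). A rough sketch with placeholder pool, dataset, and host names:

  # take a recursive snapshot of the pool
  zfs snapshot -r tank@backup-2023-08-20
  # first run: full send to the remote dataset
  zfs send -R tank@backup-2023-08-20 | ssh user@rsync-net-host zfs receive -F data/backup
  # later runs: incremental send from the previous snapshot
  zfs send -R -i tank@backup-2023-08-13 tank@backup-2023-08-20 | ssh user@rsync-net-host zfs receive data/backup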

I am satisfied with the cost of the final system configuration but not with the money I wasted along the way, diagnosing the display issue. But I will re-use the leftover components.

I am thrilled with the performance. The VMs suspend and resume instantly, vs multiple minutes on my old configuration (remember the VMs accessed their storage over a 1Gb LAN, now it's local). Windows Server and SQL are very fast, especially disk I/O, which is now local and SSD. My 'computationally intensive processing' previously ran on an old, dedicated Dell R710 rack server with 2 CPUs, 16 threads. Now it runs on a VM on this server. But the new CPU is twice as fast, and I allocate 12 threads to this VM when I need to do the number crunching, so those programs are now running much faster.

I love the small form factor, and the complete silence of the system - it's nice not to hear the disks thrashing every 15 minutes when the filesystems are backed up (because they are now SSD) and I don't hear the fans at all.

All in all I am very happy with the results, but less than happy with the process I went through to get here. It was a learning experience, but I didn't need/want another learning experience, I just needed to get this system up and running.
 

SnJ9MX

Active Member
Jul 18, 2019
Skimmed through your post - the storage bit jumped out. 3x 1TB drives in RAIDZ1 should yield ~2 TB usable, not 1.
 

leecallen

New Member
Jan 20, 2014
Skimmed through your post - the storage bit jumped out. 3x 1TB drives in RAIDZ1 should yield ~2 TB usable, not 1.
You're quite right, and my mistake goes deeper than a typo. I intended to have a configuration that would survive two disk failures - RAID-Z2 - and that is what I should have configured.

So now I have twice the available space but not the redundancy I wanted.

Thank you for pointing this out.
 

SnJ9MX

Active Member
Jul 18, 2019
You're quite right, and my mistake goes deeper than a typo. I intended to have a configuration that would survive two disk failures - RAID-Z2 - and that is what I should have configured.

So now I have twice the available space but not the redundancy I wanted.

Thank you for pointing this out.
In that case, why not just do a 3 disk mirror? Same data on all 3 disks. If you have 1 TB disks and only want 1 TB usable, just mirror them all together. Should be more performant due to not needing to do parity calcs on each read/write.
 

leecallen

New Member
Jan 20, 2014
In that case, why not just do a 3 disk mirror? Same data on all 3 disks. If you have 1 TB disks and only want 1 TB usable, just mirror them all together. Should be more performant due to not needing to do parity calcs on each read/write.
That's what I intended.
 

leecallen

New Member
Jan 20, 2014
You're right, I was being confusing. I am thinking of RAIDZ2, which does survive the loss of 2 disks (in a 3-disk configuration).
 

gea

Well-Known Member
Dec 31, 2010
You're right, I was being confusing. I am thinking of RAIDZ2, which does survive the loss of 2 disks (in a 3-disk configuration).
Z2 allows any two disks to fail, but a 3-way mirror makes more sense with 3 disks.
The mirror is as fast on writes and IOPS as the Z2 but 3 times as fast on reads.
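
For reference, rebuilding the pool as a 3-way mirror would look roughly like this (pool and device names are placeholders, and zpool destroy wipes the pool, so the data has to be restored from backup afterwards):

  # destroy the raidz1 pool and recreate it as a single 3-way mirror vdev
  zpool destroy tank
  zpool create tank mirror c1t0d0 c1t1d0 c1t2d0
  # or, starting from a 2-way mirror, attach the third disk to the existing vdev
  zpool attach tank c1t0d0 c1t2d0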
 

vjeko

Member
Sep 3, 2015
Just curious - what are you using to access the VMs - another PC, ESXi VMRC, PCoIP, or something else?
I bought a Lenovo TS140 (a mistake - it can't handle more than 4 drives) and a PCoIP zero client, which is being replaced by Blast
- all in the aim of throwing everything into one box that I could keep in the cellar.
 

leecallen

New Member
Jan 20, 2014
" what are you using to access the VM's "

I am not sure what you mean. I am accessing the VMs through their native methods -- ssh or VNC for the *nix machines, Remote Desktop for the Windows Server and Windows desktop VMs, etc.

Also I have a web browser pointed at ESXi, which allows me to manage the VMs and access them through the VMware console.

And a web browser pointed at the OmniOS VM.