Napp-it Virtual Machine Memory Usage Critical


F1ydave

Member
Mar 9, 2014
Successfully installed the OVA on ESXi 6.7u3, added the LSI 2008 PCI card, and got this memory warning after boot. The only thing I've done other than adding the PCI card was set a password.

[Attachment: screenshot of the "Napp-it Virtual Machine Memory Usage Critical" warning]

F1ydave

Member
Mar 9, 2014
I solved it, though I am not sure which change did it: I increased the boot drive to 60 GB and the RAM to 8 GB.
 

sth

Active Member
Oct 29, 2015
Typically ZFS will consume all your available RAM for cache, so you'll likely see this warning again in the future. In ESXi you can just disable that warning.
 

gea

Well-Known Member
Dec 31, 2010
As sth said, this is normal with ZFS but not critical, as the ZFS VM cannot consume more RAM than it is assigned. The size of the boot disk is not related to RAM, but with little RAM like 3 GB you are always at the limit.

With more RAM, ZFS will also fill it up over time with the ARC RAM cache (and get faster), but it frees that memory when needed.
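A quick way to watch this from a shell on the storage VM (a sketch; these are the standard illumos ARC kstats):

kstat -p zfs:0:arcstats:size    # current ARC size in bytes
kstat -p zfs:0:arcstats:c_max   # maximum size the ARC may grow to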
 

F1ydave

Member
Mar 9, 2014
Is there a recommended RAM setting? This is on the ESXi box 1 in my signature.

I haven't set anything up yet. I'm new to this; I have to go through your step-by-step guide next, which is my plan for today.
 

sth

Active Member
Oct 29, 2015
As much as possible! ZFS loves its caches. You could use some of the ZFS tools exposed in Napp-it's GUI to see how you are doing with cache hits and tweak from there. Also, what's your use case: VMs, databases, Plex video streaming, etc.? They will all have different best cases and more optimal setups.
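If you'd rather check from a shell than the GUI, the hit/miss counters live in the same arcstats kstat (a rough sketch, not napp-it's own tooling):

kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
# hit ratio = hits / (hits + misses); re-run after some real workload has warmed the cache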
 

F1ydave

Member
Mar 9, 2014
As much as possible! ZFS loves its caches. You could use some of the ZFS tools exposed in Napp-it's GUI to see how you are doing with cache hits and tweak from there. Also, what's your use case: VMs, databases, Plex video streaming, etc.? They will all have different best cases and more optimal setups.
I have a couple of VMs (Pi-hole, pfSense, Docker, Windows Server, file server, Steam). The main thing is I was going to try out the new Windows Server 2016 or 2019 (whatever the new one is that sort of replaced Server Home). I like having a domain/roaming profiles so I can just jump on. I don't technically need HA/clustering, but I enjoy this stuff. I'm trying to set up napp-it as a SAN. Windows will run off the pool on the LSI 2008 with four 500 GB WD Black 10k RPM drives. The rest will just be separate backup/excess storage pools for the VMs to be transferred onto.

The 1.2 TB ioDrive will be some sort of flash cache or a partition for flash. I also have a PCIe M.2 card with 4 empty slots that I bought just because.

The build is ESXi box 1: dual E5-2960 v1, X9DR3/I-LN4F+, 128 GB DDR3, one HP 1.2 TB ioDrive II, four 500 GB WD Blacks on the LSI 2008, four 4 TB WD Red, four 4 TB HGST.
 
Last edited:

F1ydave

Member
Mar 9, 2014
If I create a pool from 4 disks as one mirror vdev, does it automatically stripe them? Or do I need to make two 2-drive mirrors and stripe them in the next step?

I am fumbling my way through this. I made two 2-drive mirrors and figured out how to upgrade the zpool to version 5000.
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
If you create a pool from 4 disks in a single mirror vdev, you create a 4-way mirror
(capacity of 1 disk, 4x read performance, 1x write performance).

If you want to create a RAID-10, create a pool from a 2-disk mirror, then add a vdev/extend the pool with another mirror.

For an AiO with 128 GB RAM, I would give the storage VM up to 40 GB (this would max out the default RAM-based write cache, which is 10% of RAM up to 4 GB).
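For reference, the same RAID-10 layout at the command line (napp-it's Pools menu does this for you; the disk names below are placeholders, check yours in the Disks menu):

zpool create tank mirror c1t0d0 c1t1d0    # first mirror vdev
zpool add tank mirror c1t2d0 c1t3d0       # second mirror vdev -> striped mirrors (RAID-10)
zpool status tank                         # verify the layout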
 

F1ydave

Member
Mar 9, 2014
If you create a pool from 4 disks in a single mirror vdev, you create a 4-way mirror
(capacity of 1 disk, 4x read performance, 1x write performance).

If you want to create a RAID-10, create a pool from a 2-disk mirror, then add a vdev/extend the pool with another mirror.

For an AiO with 128 GB RAM, I would give the storage VM up to 40 GB (this would max out the default RAM-based write cache, which is 10% of RAM up to 4 GB).
I have to say, I am really excited about this. Even though this is all new to me, napp-it is very well organized. Thank you for the help with the RAID-10 creation.

Would 40 GB of RAM be better than adding an L2ARC/SLOG, or should it be in addition to them?
 

gea

Well-Known Member
Dec 31, 2010
If you give 40 GB to the napp-it storage VM, you can count 1-2 GB for the OS, 4 GB for write caching (2 GB current cache and 2 GB "old" cache that is written to disk in the meantime as a large and fast sequential write), and say 80% of the rest (nearly 30 GB) for read caching. The RAM-based read cache is called ARC and improves small random reads only.

You can add an NVMe/SSD to extend the ARC. This is called L2ARC. To organize the L2ARC you need RAM (bad), and the SSD is much slower than RAM (bad), so you only want an L2ARC if you cannot extend the faster RAM-based ARC. With 40 GB RAM I would not expect a serious advantage from an L2ARC.

Slog is different. It is not a write cache (that is RAM) but a fast protector of the write cache. Think of it like the battery/cache on a hardware RAID controller. Without an Slog, when you enable secure sync write behaviour (which you should with VMs), an on-pool ZIL area is used to protect the write cache. This ZIL is faster than regular small pool writes but much slower than a good dedicated Slog (like an Intel Optane or WD SS530 12G SAS).

So RAM is for performance, an L2ARC is not needed, and an Slog gives fast but secure write behaviour.
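As a sketch of how that looks on the command line (device and filesystem names here are examples only):

zpool add tank log c2t0d0          # dedicated Slog device for the pool
zfs set sync=always tank/vm        # enforce sync writes on the VM filesystem
zfs get sync tank/vm               # values: standard | always | disabled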
 
Last edited:

F1ydave

Member
Mar 9, 2014
With copy-on-write, I shouldn't miss the power-loss protection that an Slog would offer me. In the last 5 years I have only lost power twice for an extended period, since I have a backup power supply.

I want to test the read/write speed of the ioDrive II vs. my striped mirrored WD Blacks. Which test is best to use in napp-it?

It appears I may have to install drivers for the ioDrive II to be recognized. Any idea how to install the Solaris drivers? I have them mounted in an ISO at the moment.

The Solaris info said to use the ls command for the media.
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
Your ioDrive may work in Oracle Solaris, but it's not on the illumos HCL.

For performance tests in napp-it you can run Pools > Benchmark.
This is a series of filebench random and sequential tests with sync enabled and disabled.
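If you want a rough manual cross-check outside that menu, something like this works (a sketch only; tank/bench is a throwaway filesystem, and dd over zeros is only meaningful with compression off):

zfs create -o compression=off tank/bench
zfs set sync=disabled tank/bench
dd if=/dev/zero of=/tank/bench/t1 bs=1024k count=4096   # unprotected sequential write
zfs set sync=always tank/bench
dd if=/dev/zero of=/tank/bench/t2 bs=1024k count=4096   # forced sync write (ZIL/Slog path)
zfs destroy -r tank/bench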
 

F1ydave

Member
Mar 9, 2014
Your ioDrive may work in Oracle Solaris, but it's not on the illumos HCL.

For performance tests in napp-it you can run Pools > Benchmark.
This is a series of filebench random and sequential tests with sync enabled and disabled.
I guess that solves that. I am not about to try to build my own driver from another Linux driver.
 

F1ydave

Member
Mar 9, 2014
I have been able to successfully map my Windows 7 machine to the NFS share.

I am struggling to get vCenter to connect to the NFS share. I want to host my future VMs on the NFS datastore. I keep going back to the idea that my VMkernel/vSwitch setup is incorrect. Any ideas on what I am doing wrong? The only suggestion I have had was to update VMware Tools... which I am not sure how to do, since it's OmniOS (set up as a Solaris 11 guest) and it doesn't update automatically on reboot.

[Screenshots of the vSwitch/VMkernel and NFS datastore configuration attached]

Error: [screenshot]
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
What filesystem (pool/filesystem)?

If the pool/filesystem is tank/data, you must enter /tank/data in screenshot 3. Then be sure that the folder /tank/data has a permission set of everyone=modify (set from napp-it, or from Windows as root via SMB, recursively).

About VMware Tools:
On OmniOS they are in the repo: pkg install open-vm-tools

On Solaris 11, you must use the original VMware Tools. They are available as a separate download from VMware.
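A command-line sketch of the same share and permission setup (tank/data is the example filesystem from above; the ACL syntax follows the Solaris/illumos chmod NFSv4 form, and napp-it's ZFS Filesystems menu does this too):

zfs set sharenfs=on tank/data    # publish the filesystem over NFS
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/data
zfs get sharenfs tank/data       # verify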
 

F1ydave

Member
Mar 9, 2014
What filesystem (pool/filesystem)?

If the pool/filesystem is tank/data, you must enter /tank/data in screenshot 3. Then be sure that the folder /tank/data has a permission set of everyone=modify (set from napp-it, or from Windows as root via SMB, recursively).

About VMware Tools:
On OmniOS they are in the repo: pkg install open-vm-tools

On Solaris 11, you must use the original VMware Tools. They are available as a separate download from VMware.
That reply helped me figure out that NFS sharing was actually still off by default. Connected!

Thank you Gea
 

F1ydave

Member
Mar 9, 2014
Everything has been going well. I am not entirely sure I have my VMXNET3 adapter set up correctly, since vCenter in 6.7 is so different from 5.5, which I knew how to use. I am missing an IP for some reason... I may have to delete it all and rebuild the VMkernel adapters and the switch.

Anyway, I clicked on logs just for the hell of it and came across this:

[screenshot of the log entries]
Is there a way for me to figure out where this is coming from? I only have ESXi, this Windows 7 PC, a new Server 2016 Essentials (which is hosted on the ZFS pool), and an older Windows server (which I am in the process of replacing) connected to napp-it. Should I disconnect them all and monitor while reconnecting them?

I also wanted to know: what would it take to add encryption to my files?
 
Last edited:

gea

Well-Known Member
Dec 31, 2010
OmniOS only logs the failed SMB access as anonymous/guest, not where it comes from. You can only power off the Windows hosts and power them on one by one to check which one is responsible, then check its settings. Besides that, this is not critical.

Native ZFS encryption is available after an OS update to OmniOS 151032 stable followed by a pool upgrade. Then create an encrypted filesystem and copy/replicate your data onto it; see section 4 at napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Manual
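A rough outline of those steps from the shell (filesystem names are examples; note the pool upgrade is one-way):

pkg update                  # after pointing pkg at the 151032 (or newer) publisher, then reboot
zpool upgrade tank          # enable the new feature flags (cannot be undone)
zfs create -o encryption=on -o keyformat=passphrase tank/secure
# then copy or replicate the existing data into the new encrypted filesystem,
# e.g. with zfs send/receive or a replication job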
 

F1ydave

Member
Mar 9, 2014
Oh yeah, turn them off... that's easier than reconnecting, lol. That's why you make the big bucks!