Napp-it Virtual Machine Memory Usage Critical

Discussion in 'Solaris, Nexenta, OpenIndiana, and napp-it' started by F1ydave, Dec 11, 2019.

  1. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    Successfully installed the OVA on ESXi 6.7u3, added an LSI 2008 PCI card, and got this memory warning after boot. The only thing I've done other than adding the PCI card was to set a password.

    Napp-it Virtual Machine Memory Usage Critical

    [​IMG]
     

    #1
  2. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    I solved it, though I am not sure which change did it. I increased the boot drive to 60 GB and the RAM to 8 GB.
     
    #2
  3. sth

    sth Active Member

    Joined:
    Oct 29, 2015
    Messages:
    265
    Likes Received:
    40
    Typically ZFS will consume all of your available RAM for cache, so you'll likely see this warning again in the future. In ESXi you can just disable that warning.
     
    #3
  4. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,338
    Likes Received:
    784
    As sth said, this is normal with ZFS but not critical, as the ZFS VM cannot consume more RAM than assigned. The size of the boot disk is not related to RAM, but with as little RAM as 3 GB you are always at the limit.

    With more RAM, ZFS will also fill it up over time for the ARC RAM cache (and become faster), but it frees that memory when needed.
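
    If the warning itself bothers you and you would rather cap the ARC than silence the ESXi alarm, a minimal sketch on OmniOS/illumos (the 4 GB value is only an example, not a recommendation):

      # /etc/system -- limit the ZFS ARC to 4 GB (value in bytes); reboot to apply
      set zfs:zfs_arc_max=4294967296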
     
    #4
  5. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    Is there a recommended RAM setting? This is on the ESXi 1 box in my signature.

    I haven't set up anything yet. I'm new to this; I have to go through your step-by-step guide next, which is my plan for today.
     
    #5
  6. sth

    sth Active Member

    Joined:
    Oct 29, 2015
    Messages:
    265
    Likes Received:
    40
    As much as possible! ZFS loves its caches. You could use some of the ZFS tools that are exposed in napp-it's GUI to see how you are doing with cache hits and tweak from there. Also, what's your use case: VMs, databases, Plex video streaming, etc.? They all have different best cases and more optimal setups.
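
    If you prefer the shell to the GUI, a quick sketch using the illumos kstat counters (statistic names per zfs::arcstats):

      kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses   # raw ARC hit/miss counters
      kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max    # current ARC size vs. its ceiling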
     
    #6
  7. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    I have a couple of VMs (Pi-hole, pfSense, Docker, Windows Server, file server, Steam). The main thing is that I was going to try out the new Windows Server 2016 or 2019 (whatever the new one is that sort of replaced Server Home). I like having domain/roaming profiles so I can just jump on. I don't technically need HA/clustering, but I enjoy this stuff. I'm trying to set up napp-it as a SAN. Windows will run off the pool on the LSI 2008 with four 500 GB WD Black 10k RPM drives. The rest will just be separate backup/excess storage pools for the VMs to be transferred onto.

    The 1.2 TB ioDrive will be some sort of flash, or a partition for flash. I do have a PCIe M.2 card too, with four empty slots, that I bought just because.

    The build is ESXi box 1: dual E5-2960 v1, X9DR3/I-LN4F+, 128 GB DDR3, one HP 1.2 TB ioDrive II, four 500 GB WD Blacks on the LSI 2008, four 4 TB WD Red, four 4 TB HGST.
     
    #7
    Last edited: Dec 12, 2019
  8. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    If I create a pool from 4 disks as a mirror vdev, does it automatically stripe them? Or do I need to make two 2-drive mirrors and stripe them in the next step?

    I am fumbling my way through this. I made two 2-drive mirrors and figured out how to upgrade the zpool to version 5000.
     
    #8
    Last edited: Dec 13, 2019
  9. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,338
    Likes Received:
    784
    If you create a pool from 4 disks in a single mirror, you create a 4-way mirror
    (capacity of 1 disk, 4x read performance, 1x write performance).

    If you want to create a Raid-10, create a pool from a 2-disk mirror, then add a vdev/extend the pool with another mirror.

    For an AiO with 128 GB RAM, I would give the storage VM up to 40 GB (this would max out the default RAM-based write cache at 4 GB / max 10% of RAM).
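
    A minimal sketch of those two steps from the shell (the disk names c1t0d0 ... c1t3d0 are hypothetical; the napp-it Pools menu does the same thing):

      zpool create tank mirror c1t0d0 c1t1d0   # first 2-disk mirror vdev
      zpool add tank mirror c1t2d0 c1t3d0      # extend with a second mirror vdev -> Raid-10
      zpool status tank                        # verify: two mirror vdevs, writes striped across them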
     
    #9
    F1ydave likes this.
  10. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    I have to say, I am really excited about this. Even though this is all new to me, napp-it is very well organized. Thank you for the help with the RAID-10 creation.

    Would 40 GB of RAM be better than adding L2ARC/Slog devices, or in addition to them?
     
    #10
  11. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,338
    Likes Received:
    784
    If you give 40 GB to the napp-it storage VM, you can count 1-2 GB for the OS, 4 GB for write caching (2 GB current cache and 2 GB "old" cache that is meanwhile written to disk as a large and fast sequential write), and say 80% of the rest (near 30 GB) for read caching. The RAM-based read cache is called the ARC and improves small random reads only.

    You can add an NVMe/SSD to extend the ARC. This is called an L2ARC. To organize the L2ARC you need RAM (bad), and the SSD is much slower than RAM (bad), so you only want an L2ARC if you cannot extend the faster RAM-based ARC. With 40 GB RAM I would not expect a serious advantage from an L2ARC.

    An Slog is different. It is not a write cache (that is RAM) but a fast protector of the write cache. Think of it like the battery/cache on a hardware RAID. Without an Slog, when you enable secure sync write behaviour (which you should with VMs), an on-pool ZIL area of the pool is used to protect the write cache. This ZIL is faster than regular small pool writes, but much slower than a good dedicated Slog (like an Intel Optane or WD SS530 12G SAS).

    So RAM is for performance, an L2ARC is not needed, and an Slog gives fast but secure write behaviour.
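
    For reference, a minimal sketch of those two knobs from the shell (the filesystem name tank/vm and the device name c2t0d0 are hypothetical; the napp-it ZFS Filesystems and Pools menus expose the same settings):

      zfs set sync=always tank/vm   # force secure sync writes for the VM filesystem
      zpool add tank log c2t0d0     # add a dedicated Slog device to protect the write cache
      zpool status tank             # the device now shows up under "logs"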
     
    #11
    Last edited: Dec 13, 2019
  12. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    With copy-on-write, I shouldn't miss the power-loss protection that an Slog would offer me. In the last 5 years, I have only lost power twice for an extended period, since I have a backup power supply.

    I want to test the read/write speed of the ioDrive2 vs. my striped mirrored WD Blacks. Which test is best to use in napp-it?

    It appears I may have to install drivers for the ioDrive2 to be recognized. Any idea how I install the Solaris drivers? I have them mounted in an ISO at the moment.

    [​IMG]

    The Solaris info said to use the ls command for the media.
     
    #12
    Last edited: Dec 16, 2019
  13. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,338
    Likes Received:
    784
    Your ioDrive may work in Oracle Solaris, but it's not on the illumos HCL.

    For performance tests in napp-it you can run Pools > Benchmark.
    This is a series of filebench random and sequential tests with sync enabled and disabled.
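
    If you just want a quick and dirty sequential check from the shell (not the filebench suite napp-it runs, and easily skewed by caching and compression), something like:

      zfs create tank/bench                                         # hypothetical scratch filesystem
      zfs set compression=off tank/bench                            # don't let compression hide the zero fill
      dd if=/dev/zero of=/tank/bench/testfile bs=1024k count=8192   # rough sequential write (8 GB)
      dd if=/tank/bench/testfile of=/dev/null bs=1024k              # rough sequential read (may come from ARC)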
     
    #13
  14. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    I guess that solves that. I am not about to try to build my own driver from the Linux driver.
     
    #14
  15. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    I have been able to successfully map my Windows 7 machine to the NFS share.

    I am struggling to get vCenter to connect to the NFS share. I want to host my future VMs on the NFS datastore. I keep going back to the idea that my VMkernel/vSwitch are set up incorrectly. Any ideas on what I am doing wrong? The only suggestion I have had was to update the VMware Tools... which I am not sure how to do, since it's OmniOS rather than Solaris 11 and it doesn't update automatically upon reboot.



    [​IMG]

    [​IMG]

    [​IMG]

    [​IMG]

    [​IMG]
    Error:
    [​IMG]
     
    #15
    Last edited: Jan 3, 2020
  16. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,338
    Likes Received:
    784
    What filesystem (pool/filesystem)?

    If the pool/filesystem is tank/data, you must enter /tank/data in screenshot 3. Then be sure that the folder /tank/data has a permission set of everyone=modify (set recursively from napp-it, or from Windows as root via SMB).
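
    A minimal sketch of the same settings from the shell, assuming the filesystem really is called tank/data (the napp-it ZFS Filesystems and ACL menus set the same properties):

      zfs set sharenfs=on tank/data                                  # or e.g. sharenfs=rw=@192.168.1.0/24 to restrict to the ESXi network
      /usr/bin/chmod -R A=everyone@:modify_set:fd:allow /tank/data   # everyone=modify, inherited to new files and folders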

    About the VMware tools:
    On OmniOS they are in the repo: pkg install open-vm-tools

    On Solaris 11, you must use the original VMware Tools. They are available as a separate download from VMware.
     
    #16
  17. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    That reply helped me figure out that NFS sharing was actually still disabled by default. Connected!

    Thank you, gea!
     
    #17
  18. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    Everything has been going well. I am not entirely sure I have my VMXNET3 set up correctly, since vCenter in 6.7 is so different from 5.5, which I knew how to use. I am missing an IP for some reason... I may have to delete it all and rebuild the VMkernels and the switch.

    Anyway, I clicked the logs just for the hell of it and came across this:

    [​IMG]

    Is there a way for me to figure out where this is coming from? I only have ESXi, this Windows 7 PC, a new Server 2016 Essentials (which is hosted on the ZFS pool), and an older Windows server (which I am in the process of replacing) connected to napp-it. Should I disconnect them all and monitor while reconnecting them?

    I also wanted to know what it would take to add encryption to my files.
     
    #18
    Last edited: Jan 16, 2020
  19. gea

    gea Well-Known Member

    Joined:
    Dec 31, 2010
    Messages:
    2,338
    Likes Received:
    784
    OmniOS only logs the failed SMB access as anonymous/guest, not where it comes from. You can only power off the Windows hosts and power them on one by one to check which one is responsible, then check its settings. Besides that, this is not critical.

    Native ZFS encryption is available after an OS update to OmniOS 151032 stable, followed by a pool upgrade. Then create an encrypted filesystem and copy/replicate your data onto it; see section 4 at napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris : Manual.
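
    A minimal sketch of those steps from the shell (the names tank, tank/data and tank/secure are hypothetical; the napp-it menus cover the same steps):

      zpool upgrade tank                                                          # enable the new feature flags after the OS update (one-way)
      zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/secure   # prompts for a passphrase
      zfs snapshot tank/data@move
      zfs send tank/data@move | zfs receive tank/secure/data                     # received data is encrypted under the new parent; a plain file copy works too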
     
    #19
    F1ydave likes this.
  20. F1ydave

    F1ydave Member

    Joined:
    Mar 9, 2014
    Messages:
    132
    Likes Received:
    21
    Oh yeah, turn them off... that's easier than reconnecting, lol. That's why you make the big bucks!
     
    #20