New to Napp-it


F1ydave

Member
Mar 9, 2014
I have no experience with ZFS. I read through the entire Napp-it website and the new one-pager, but I am still unsure of the best way to go about this.

The pool/vdev is confusing to me.

I have been using ESXi since 5.0, with each drive as its own VMFS datastore. At one point I had Veeam or another tool backing up the VMs, but I broke that while trying out different things. Since then I have just been using an image backup within Windows Server to a NAS over iSCSI. I really like the ability to add VMs back after a hypervisor update. A friend of mine is trying to get me onto Unraid; I currently do a 3-2-1 backup to his home for offsite over OpenVPN.

Just built a new homelab and would really like to try out Napp-it or StarWind vSAN.

ESXi 6.7u2
Supermicro X9DRi-LN4F (v1.10) - dual Xeon E5-2690 v1
128GB DDR3 ECC REG
90GB Intel S3500, mirrored in a StarTech RAID cage
LSI 2008 PCIe HBA - 4 x 500GB WD Black (10,000 rpm)
900GB Fusion-io ioDrive2 (NVMe PCIe), overprovisioned down from 1.2TB (used, 300 million GB of writes/reads)
4 x 4TB WD Red
4 x 4TB Seagate

NAS
Asustor Intel Atom - 10 bay - 4TB HGST drives in RAID 6

I have 2 x 120GB Samsung 840 EVO SSDs lying around.

Network:
Ubiquiti USG 4 Pro
Ubiquiti 24 Port 250w

It's an empty hypervisor at the moment. I plan to run Windows Server 2019 Essentials and a couple of ESXi appliances. I would ideally like to partition the ioDrive for cache and then use the other partition to run Windows Server for HA, or at least decently (I've read this might not be possible). It is a used Fusion-io card, so I would be looking for an easy setup/backup if the card ever failed. I would be open to buying another card to mirror, or to any suggestions. I will probably start expanding the drives to 8-10TB in the next year or two.
 

gea

Well-Known Member
Dec 31, 2010
"The pool/vdev is confusing to me."
Maybe you know a traditional Raid-5 array.
Such an array is used to improve the sequential performance of the disks (roughly equal to the sum of the data disks). The whole array is presented to the OS like a single fast disk.

On ZFS, replace the expression "Raid-5 array" with "Raid-Z1 vdev". At first glance they are the same regarding performance advantage and redundancy (one disk is allowed to fail).

If you additionally want to improve random IO, you can stripe a second Raid-5 over it, which makes it a Raid-50. Compared to the single Raid-5 this doubles sequential and random IO performance. On ZFS you call this a data pool built from two vdevs. While with Raid-50 you cannot stripe further arrays, on ZFS you can add more vdevs; each one gives you higher sequential and random performance.

If you think of Raid-6 or Raid-60, the comparable vdev type on ZFS is Raid-Z2. Unlike traditional Raid, ZFS even offers Raid-Z3 (where three disks in a vdev are allowed to fail).
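
If you want to see what this looks like on the command line (napp-it does the same through its menus), here is a minimal sketch; the pool name "tank" and the disk names are placeholders for your real devices:

  # pool from one Raid-Z1 vdev (the "Raid-5" analogy)
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0
  # stripe a second Raid-Z1 vdev over it (the "Raid-50" analogy)
  zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0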

So why is it called Raid-Z1/Z2 and not Raid 5/6?

The main reason is that traditional Raid 1/5/6 has a huge problem called the write hole phenomenon, see "Write hole" phenomenon in RAID5, RAID6, RAID1, and other arrays.

As disks are updated sequentially (a raid stripe is written disk by disk, data followed by metadata updates), any crash during writes can corrupt the Raid and/or the filesystem. A hardware raid with cache/BBU protection can reduce the problem a little. ZFS does not have this problem because it is CopyOnWrite: an atomic write is either completed or discarded. Additionally, ZFS adds checksums to all data and metadata. Even on a mirror with one corrupted half, ZFS knows which half is corrupted and which is valid. Traditional Raid cannot fix such a problem; it cannot even detect it.
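
To illustrate the checksum side (pool name "tank" is again a placeholder), a scrub reads every block, verifies it against its checksum and repairs it from redundancy on a mismatch:

  zpool scrub tank       # read and verify every block against its checksum
  zpool status -v tank   # shows scrub progress and any repaired checksum errors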

Additionally, ZFS does not use partitions of a fixed size like older filesystems do. If you add a new vdev that increases capacity, every ZFS filesystem can use the additional space. You can control this with quotas and reservations. The concept is called storage virtualisation (virtualising the storage appliance software is a different matter).
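
A short sketch of quotas and reservations, assuming a pool "tank" with filesystems "vm" and "backup" already created:

  zfs set quota=500G tank/vm            # tank/vm may never grow beyond 500G
  zfs set reservation=200G tank/backup  # 200G of pool space is guaranteed for tank/backup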

Regarding your idea:
For this I brought up the All-in-One concept ten years ago.
It means using ESXi as a type-1 hypervisor below everything and virtualizing a storage appliance beside all the other guest VMs, with the best possible support by ESXi. For all VMs and other storage needs you then use ZFS over iSCSI, NFS or SMB.

OmniOS is a perfect storage OS. It is a "just enough" storage OS: everything you need for storage is included, but nothing else. It is stable from 2GB RAM upwards (more RAM = faster) and has about the lowest memory footprint of all ZFS options.
 

F1ydave

Member
Mar 9, 2014
Do I understand this correctly: you would have two vdevs to mirror or stripe, and then the pool would be the partition(s)?

Are there any concerns with a software array? What happens if the OmniOS or the VM gets corrupted? Do you lose the whole array/pool?

It really does sound like ZFS is much better. I have never been a big fan of raid because in all my experience I lose the data eventually, and it was always a nightmare trying to expand to bigger disks. The reason I rebuilt my server 6 years ago was that I was using an old HP server on SCSI and upgraded the 300GB drives to 600GB in a 10-disk Raid-5 array... yet I couldn't expand the array, even though it was using them. Luckily they only cost me $30 each at the time.
 

gea

Well-Known Member
Dec 31, 2010
No.
ZFS is "pooled storage". A pool is the entity where you create your filesystems. Filesystems are like partitions on a conventional disk or raid array, but unlike partitions, ZFS filesystems have no fixed size. They can grow dynamically up to the whole available pool size.

To create a ZFS pool you select one or more disks and the raid type (basic/single disk, mirror, Raid-Z); ZFS calls such a raid array a "vdev". To increase pool capacity and performance you can add as many vdevs as you like; the pool is like a Raid-0 stripe over the vdevs. I have seen pools built from over 20 mirrors.

For a typical AiO (ESXi + virtualized storage appliance) you mostly use a fast VM pool, e.g. from an SSD/NVMe mirror, and a second pool for filer and backup use, e.g. a pool from a single Raid-Z2 vdev with 4-10 disks.
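
As a sketch (pool and disk names are placeholders), such an AiO layout could be created like this:

  # fast VM pool from an SSD/NVMe mirror
  zpool create vmpool mirror c3t0d0 c3t1d0
  # filer/backup pool from a single Raid-Z2 vdev
  zpool create datapool raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0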

ZFS Raid is very robust and it is very hard to end up with a corrupted ZFS pool. This mainly happens when more disks fail than the redundancy allows (e.g. three disks in a Z2) or on a serious hardware problem such as overvoltage or a bad controller. For such a disaster you need a backup - even with ZFS.

All raid information is on the pool itself. This means you can simply move the disks to another ZFS server and import the data pool with all settings intact; in the case of a Solaris server even the sharing settings and the Windows ntfs-alike ACL permissions survive.
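
For illustration (pool name is a placeholder), a pool move is just:

  zpool export datapool    # on the old server
  zpool import datapool    # on the new server, after moving the disks
  zpool import             # without a name: lists all importable pools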

To expand a pool you can add more vdevs (raid arrays), or you can replace all disks of a vdev with larger ones. When all are replaced, you can use the increased capacity.

Currently you cannot expand a single Raid-Z vdev, e.g. from 5 to 6 disks (this feature is under way).
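
A sketch of the replace-with-larger-disks path (pool and disk names are placeholders):

  zpool set autoexpand=on datapool
  zpool replace datapool c4t0d0 c5t0d0   # repeat per disk, wait for each resilver
  # when the last disk of the vdev has been replaced, the extra capacity becomes usable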
 

F1ydave

Member
Mar 9, 2014
I am reading your All-in-One guide. Is a ZIL device worth it if you have a backup power supply?

I have 2 x 100GB Intel 710s in a raid mirror that I was planning on hosting the VMs with.
 

gea

Well-Known Member
Dec 31, 2010
On a crash during writes, the content of the RAM-based write cache is lost by default. This can happen even with a backup power supply. Thanks to CopyOnWrite this cannot corrupt a ZFS filesystem, but it can corrupt the guest filesystem of a VM.

On a conventional hardware raid you can use a controller with flash/BBU protection to reduce the risk. On ZFS you can enable sync to fully protect the RAM-based write cache.

If you enable sync, every single small committed write is logged, either on-pool to the ZIL or to a dedicated fast Slog device. Such an Slog can be much faster than on-pool ZIL logging.

If you create a VM pool from a mirror of Intel 710s, I would simply enable sync and use on-pool ZIL logging. The Intel 710 has powerloss protection, so the VM pool is safe - although not the fastest.
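
As a sketch (pool and filesystem names are placeholders), enabling sync and optionally adding an Slog later looks like this:

  zfs set sync=always vmpool/nfs   # log every committed write (on-pool ZIL)
  zfs get sync vmpool/nfs          # verify the setting
  # optional dedicated Slog device instead of on-pool ZIL logging:
  # zpool add vmpool log c6t0d0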
 

gea

Well-Known Member
Dec 31, 2010
I can confirm two problems with the current ESXi 6.7u2
and the ova built from the first 6.7:

1. Checksum error
Download the current ova, or open the ova (e.g. with 7zip) and remove the .mf file:
Error when i deploye OVA in my ESX

2. ESXi crashes during import
ESXi 6.7 U2 Web client crashes when importing OVF |VMware Communities


I have placed a textfile in the download folder
(maybe the easiest option is to install OmniOS from scratch):

If you have problems importing an ova:

- an older ova may not be compatible with current ESXi
- a newer ova may not be compatible with an older ESXi

So it is best to use an ova created with or for your ESXi version.

Some problems can be fixed, see
Error when i deploye OVA in my ESX

There is also a problem with current ESXi 6.7 U2 (crash during deployment)
ESXi 6.7 U2 Web client crashes when importing OVF |VMware Communities

On problems, or if you need an older or newer version of OmniOS:
- manually create a VM with the OmniOS boot iso
(type Solaris 64 bit, min 3 GB RAM, HD min 30GB, one e1000 nic and one vmxnet3 nic)

- install current OmniOS (upload the iso to a datastore and connect it to the VM's DVD drive)
- install napp-it via: wget -O - www.napp-it.org/nappit | perl
- install open-vm-tools via: pkg install open-vm-tools

Use e1000 for management and vmxnet3 for data (much faster).
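
If you install manually, the vmxnet3 data nic must also be configured inside OmniOS. A sketch with console commands (the link name vmxnet3s0 and the address are assumptions, check dladm for your actual names):

  dladm show-link                                              # list the nics ESXi presents
  ipadm create-if vmxnet3s0
  ipadm create-addr -T static -a 192.168.2.10/24 vmxnet3s0/v4  # static address (example)
  # or via DHCP: ipadm create-addr -T dhcp vmxnet3s0/dhcp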
 