@jjoyceiv
So to clarify.. for a small office serving 5 people.. serving them what? Desktops, NAS data, just the OwnCloud data stores? You will need to plan out the required IOPS and network bandwidth fairly carefully for anything other than plain file sharing, or you will create choke points... anyway... I have been setting up an all-in-one at home to replace my aged media server and also to run an OS X instance for security cameras. I run Plex on an Ubuntu VM, plus OS X, Observium, the vCenter VCSA, and others on my ESXi box, and things are starting to come together.
The hardware setup is a dual L5640 with 48GB of RAM (you will run out of RAM before anything else in ESXi, so get the most you can afford), an LSI 9212 4e4i (I only have room for one card), and just the single motherboard NIC.
For drive arrangement:
BOOT ESXI: internal USB header, cheap 6GB thumb drive (best practice; ESXi runs from memory once booted)
SECOND BOOT: Observium, THEN the napp-it VM. 60GB cheap SSD on onboard SATA (an ESXi VMFS datastore)
This second boot is critical, as napp-it provides the NFS shares ESXi looks to for the VMs
I would keep this on a cheap SSD, as booting from something slower sometimes causes timeouts
ESXi is a PITA for UPS management unless you have a networked UPS, and even then it's a PITA
I double-purpose the Observium VM to run apcupsd connected to my USB UPS to control shutdown
Therefore it HAS to boot first and shut down LAST for that to work
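For reference, the apcupsd side of this is just a few directives. A minimal /etc/apcupsd/apcupsd.conf fragment for a USB-attached APC unit might look like the following; the thresholds are illustrative, tune them to your actual battery runtime:

```shell
# /etc/apcupsd/apcupsd.conf (excerpt) -- values here are examples
UPSCABLE usb        # USB data cable
UPSTYPE usb         # modern USB-connected APC units
DEVICE              # leave blank for USB autodetection
BATTERYLEVEL 20     # start shutdown at 20% battery remaining
MINUTES 5           # ...or when 5 minutes of runtime remain
TIMEOUT 0           # 0 = use BATTERYLEVEL/MINUTES, not a fixed timer
```

apcupsd then runs its shutdown script, which is where you hook in the ordered guest shutdowns before the host goes down.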
Napp-it boots and gets the LSI card through passthrough, so it has hardware-level control of that card and the 2x Intel S3500 SSDs running a stripe for speed and maximum storage space for the VMs. It also gets both 8TB hard drives passed to it via RDM from the motherboard SATA ports. This works just fine and is pretty transparent. I don't know if I am taking a write-performance hit this way, as I think the queue depth of the drives ends up at 32, but it's mostly a data drive for media, so it's read-intensive, and scrubs of this pool are just as fast as raw tests on the drives before the build. Ideally another LSI card would be the way to go, but I can't fit one in the box. So now we have 2 pools up and running: one fast SSD-based pool for the VMs and a large data pool. Keep the IO-latency-critical stuff on the LSI, as passing RDM from SATA is a question mark. Also, since you are booting napp-it from a SATA port, you can't pass napp-it the whole SATA controller, so RDM passing of the individual drives is a requirement at this point.
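For anyone trying the SATA RDM trick: the pointer files are created on the ESXi shell with vmkfstools and then attached to the napp-it VM as existing disks. A sketch, with device and datastore names as placeholders:

```shell
# Find the device identifiers of the local SATA disks (name below is made up)
ls /vmfs/devices/disks/
# Create a physical-mode RDM pointer file on a VMFS datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE8TB_DISK1 \
  /vmfs/volumes/datastore1/napp-it/8tb-disk1-rdm.vmdk
# Then add the resulting .vmdk to the napp-it VM as an existing hard disk
```

Physical mode (-z) passes commands through to the drive so ZFS can see it mostly as-is; -r would create a virtual-mode RDM instead.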
ESXi then mounts the NFS share from the napp-it SSD pool for the VMs, and they begin to autoboot in the sequence you specify through vCenter. Getting that working without vCenter is a crapshoot at best.
I will not get into the internal and external networking, as gea has good guides and there are a million question marks on how to optimize, so you will have to experiment. Personally, if you have some VMs that need lots of network bandwidth, get a 4-port card and pass some ports directly to the VM rather than having all that traffic virtualized, but that's just an opinion
For your setup on the drives: you have or will get an 8-port card and only listed 8 drives for your working sets (6x6TB plus 2 SSDs for pools), so you could probably keep them all on the same LSI card and just use the motherboard SATA for the napp-it boot SSD. You could boot ESXi from it too, but that's a waste
Trade-offs and decisions...
One.. ZFS raidz/mirrors/whatever is not a backup.. let's repeat that.. it's not a backup.. raidz and mirrors exist for one reason: to repair data and keep you limping along online without having to bring the datastore down for repair.
I back up everything, hence the 4e4i card. The 4e side goes over an 8088 cable to a 24-disk shelf running a backup storage pool, also controlled by napp-it, when I need to run backups. Therefore my online datasets are set up for speed and storage capacity while the backup array is set up for redundancy. I am comfortable running this way; you may not be. It depends on your tolerance for downtime and the criticality of the online pools between backups.
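The actual backup run is plain ZFS replication. A sketch with made-up pool and dataset names:

```shell
# Snapshot the online pool and replicate it to the backup pool on the shelf
zfs snapshot -r tank/media@bk-2017-01-01
zfs send -R tank/media@bk-2017-01-01 | zfs receive -Fdu backup
# Later runs only ship the changes since the last common snapshot
zfs snapshot -r tank/media@bk-2017-02-01
zfs send -R -i bk-2017-01-01 tank/media@bk-2017-02-01 | zfs receive -Fdu backup
```

The incremental sends are why snapshots make such a cheap backup transport; only changed blocks cross the wire.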
BIG QUESTION MARKS
The SSD-based VM pool, on SSDs with capacitor protection
ZFS and hardware RAID are fundamentally different on sync-write protection, and only ZFS guarantees data integrity. Google "hardware raid write hole": even with battery backup you can have silent data corruption.
Some say that sync writes are not needed at all. Personally I leave them on for peace of mind, since VMDKs corrupt easily, and hitting "power off" instead of "shut down guest" is like yanking the cord on the VM, and you will do it... so when ZFS tells your OS that the write has been committed, it's best to have that guarantee.
In ZFS this guarantee is provided by the ZIL. It lives in one of 2 places: either on the pool with the rest of your data, or on a separate log device, a SLOG. If you keep your ZIL on the pool, it only holds the data between commits, which is about 5 seconds' worth before ZFS commits it as a regular write. So what happens is this: rather than wait until ZFS does a flush to write the data out of RAM, for a sync write it LOGS the write to the ZIL and then ACKs; the real commit still happens from RAM, which is faster. The ZIL/SLOG never gets read unless there is a power outage that 1) takes out the RAM and 2) leaves uncommitted writes in the ZIL/SLOG on reboot. Then it REPLAYS what should have been written: writes reported back to the OS as completed but that died in RAM on power loss. It's a 5-second non-volatile scratchpad. More below.
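The knobs for this are ordinary ZFS properties; pool and dataset names below are examples:

```shell
# 'standard' (the default) honors the client's sync requests; 'always' forces
# every write through the ZIL; 'disabled' lies to the client -- avoid for VMs
zfs set sync=always tank/vms
zfs get sync tank/vms
# Move the ZIL off-pool by adding a dedicated SLOG device
zpool add tank log c4t1d0        # device name is a placeholder
zpool status tank                # the 'logs' section shows the SLOG
```

Note the SLOG only ever absorbs sync writes; async traffic never touches it.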
So, knowing that we have to deal with sync writes, this is what I have found. ZFS is still very much a black box as far as the algorithms used, and you will find disagreement at almost every turn
Sync writes generally write data TWICE. It was designed that way for spinning media so the ACK for the write can be returned as quickly as possible: ZFS shotguns the initial write wherever the heads happen to be, scattered all over the platters, which makes a mess but is quick. Then ZFS re-commits the writes from RAM 5 seconds or so later, laying the tracks down in a more efficient manner on the second write. This behavior does change based on the block size of the data written, and on tuning settings under the hood of ZFS, but that is generally how it works without a log device. With a log device, the first write goes to the log device, and ZFS still does the final commit from RAM to the spinning media in order to lay down bigger, more efficient tracks (unless, as stated above, there is a power failure). With spinning disks and heavy sync writes that actually commit to the ZIL, an SSD log speeds this up. But if your pool is already SSD-based, an SSD log really doesn't get you performance; what it does is keep your pool SSDs from writing data twice, and on platters it reduces Swiss-cheesing, since a copy-on-write filesystem like ZFS doesn't rewrite in place. On a pool with a sync workload you will notice very quickly that fragmentation goes up, and you can't defrag a ZFS pool: no big deal on SSDs, not great for platters. You would need an NVMe device or a ZeusRAM to get a speed boost over an SSD pool.
Further, on an SSD-based sync-write pool with adequate UPS backup and capacitor-backed SSDs, some say that setting logbias=throughput instead of latency (the default) bypasses the ZIL altogether and only writes the data once, forcing the sync write to commit before the ACK. On striped SSDs with almost no latency, that could be faster than a separate log device. On spinning platters it might give a speed bump on large sequential writes but be REALLY BAD for small random sync writes. For SSDs it might be the way to go. I'm still looking at it, but for now the striped SSDs are plenty for my VMs, as they are not write-intensive; vCenter probably does the most
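If you want to experiment with that, it's a single property and easy to flip back; the dataset name is an example:

```shell
# Bias large sync writes straight to the main pool devices, skipping the ZIL
zfs set logbias=throughput tank/vms
# Default behavior: favor low latency, stage sync writes in the ZIL/SLOG
zfs set logbias=latency tank/vms
```

Benchmark with your real VM workload before and after; the win or loss depends heavily on write sizes.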
So for your setup it should work
I would put your VM SSDs on the HBA controller and keep the data drives there too, with SATA via RDM as a second-tier fallback
Get lots of ram... get lots of ram...
You need a backup solution.. ZFS is not a BACKUP... let me say that again.. you need a backup..
What little performance hit you MAY take from having the SSD VM storage on the all-in-one, versus running it directly attached to ESXi via RAID on your LSI card, is more than made up for in simplicity of use. Hardware RAID is flaky, has silent corruption via the write hole, and ESXi is hard to back up without 3rd-party software...
Keep in mind that most ESXi installations get their data from external datastores over a cable of some type (Ethernet, Fibre Channel, etc.) from a separate filer. An ESXi all-in-one gets its data over an internal, virtual network connection that has far less latency and more pipe than all but the most expensive 10/40Gb networking gear, and is far more reliable, especially if it's only one host and not a cluster. If you need a cluster, then perhaps a separate shared resource makes sense, but a single-host all-in-one really reduces failure points
Running ZFS in the all-in-one protects the data FAR better than hardware RAID, PERIOD, and that is not disputed; it's far superior to hardware RAID in every aspect. For example, hardware RAID would rebuild your 6x6TB array block by block regardless of how full it is; ZFS only rebuilds DATA, not BLOCKS, so at 5% full the resilver takes a small fraction of the time
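Back-of-envelope numbers make the point; the 150MB/s sustained rate is an assumption, and the proportionality is the takeaway:

```shell
# Hardware RAID rebuilds every block on the disk; ZFS resilvers only live data.
DRIVE_GB=6000        # one 6TB drive
SEQ_MBS=150          # assumed sustained rebuild rate in MB/s
FILL_PCT=5           # pool only 5% full

block_hours=$(( DRIVE_GB * 1000 / SEQ_MBS / 3600 ))
data_min=$(( DRIVE_GB * FILL_PCT / 100 * 1000 / SEQ_MBS / 60 ))
echo "block-level rebuild of the whole disk: ~${block_hours} h"
echo "ZFS resilver at ${FILL_PCT}% full: ~${data_min} min"
```

In practice resilver rates vary with fragmentation and load, so treat these as best cases.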
ZFS is also far more tolerant of disk/cabling issues, where a hardware RAID would fault the whole array on a hiccup.
Some issues I am having with napp-it / OmniOS as my ZFS provider:
I am running without a napp-it license, so I think I am losing the ZFS/ESXi auto/hot snaps, which in this case would be nice. ACLs and permissions are a pain without access to napp-it's GUI. Monitoring is virtually nonexistent without a license. OmniOS's Solaris-derived ZFS lacks xattr=sa, which stores xattrs in the file's metadata; I know OS X prefers that (as do most other modern OSes), and I think its absence is really slowing things down and possibly causing my other OS X compatibility issues on the SMB shares. SMB is only 2.1 on OmniOS while the rest of the world is running 3+; again, a slowdown.
For my use case I am thinking that FreeNAS, being based on FreeBSD (which is closer to OS X), would be a bit better in the compatibility department and have all the features I am lacking without needing to buy a license. It's a hobby machine, so I don't have the cash to fork out every year for one.
Or take the overhead and run ZFS on OS X like I have for years on the non-ESXi server this replaces: use OS X for 90% of my ZFS data and use the free napp-it only for the VM pools. But then I would need to tear down and restructure the ESXi box, and this has taken long enough to set up, so testing continues...
I have been running ZFS for 8+ years and I really can't find a better solution. ZFS still does more, and does it better, than the competition...
ZFS however is VERY powerful and thus very COMPLEX. It's not set-it-and-forget-it if you want to get everything out of it; that is why Sun/Solaris storage engineers get paid big consulting bucks for enterprise setups. But for our use cases, just knowing the basics is usually good enough, unless you make some serious blunders...
6 drives in raidz2 will only be as fast as the slowest drive; the vdev is really visible as a single 'drive' for IOPS. Compare that to a stripe of two 3-drive raidz1 vdevs: you get the same capacity, but now you have twice the IOPS, as it's seen as 2 devices, and your rebuilds will be faster.
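In zpool terms the two layouts look like this; device names are placeholders, and note the trade-off that each 3-disk raidz1 vdev only survives one disk failure, versus any two for raidz2:

```shell
# One 6-disk raidz2 vdev: capacity of 4 disks, IOPS of roughly 1 disk
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# Two striped 3-disk raidz1 vdevs: same capacity of 4 disks, ~2x the IOPS
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 raidz1 c1t3d0 c1t4d0 c1t5d0
```

The vdev layout is one of those set-in-stone-at-creation decisions, so pick it before you load data.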
There are lots of good sites on ZFS. If you have never used ZFS before, it's really best to read first and build later; many things in a ZFS pool are set in stone at creation time and can't be changed, and a poor setup will hobble you