newbie napp-in-one questions


vjeko

Member
My hardware:
TS140 server, 4 cores, 20GB RAM (integrated I217 NIC for admin), 16GB USB stick for ESXi, 250GB local SATA SSD, IBM M1015 reflashed to LSI 9211-8i HBA with 2 * 1TB SATA HDDs connected (want to have them mirrored), I350-T4 quad NIC.

I installed ESXi 6.0 Update 1 on the USB stick (the integrated NIC and the I350-T4 were recognised / I didn't add any drivers) and the vSphere Client on a Win XP PC for admin.

I have set up passthrough for the HBA and that's as far as I've got - still quite in the dark about a lot of things.

I will be using this setup basically as a learning workstation (virtualization/software development /OS learning) and for storage.

According to the napp-it documents, I saw two scenarios for ESXi and OmniOS installation:
either ESXi on USB / OmniOS on SSD, or
ESXi on SSD (datastore) / OmniOS on a virtual disk on the same datastore for a mirrored boot-disk solution.

Q1: What's the best setup for ESXi and OmniOS if only 1 SSD + 1 USB are available, and what needs to be done (backup etc., and how) to ensure recovery to new disk(s) can be done easily?

Q2: For the OmniOS installation, I was planning to use
napp-it_15d_ova_for_ESXi_5.5u2-6.0 - is this OK since it mentions ESXi 5.5 - the tools version?

Q3: I've connected the 2 * 1TB HDDs to the first SAS connector on the HBA - is that right / does it matter for mirroring? If in the future I want to add 2 * SSD to the HBA (to have fast VMs - I would use the HDDs for backup then) and have them mirrored also, can I just connect them to the second SAS connector?

Q4: All disks are new - do I need to do any sort of formatting beforehand?

Q5: What is the min/ideal CPU/memory allocation for the OmniOS VM - still 2 CPUs, 6GB - and can this be changed later, i.e. do you just stop the VM for which you want to make the change?
 

whitey

Moderator
High level (feeling lazy today and you already have some of the work done)

-Attach HW to the HBA to your preference. W/ the mentioned HW I would shoot for all on the M1015: plan for a pool of SSDs (up to 4) and a pool of magnetics (again up to 4) to start...baby steps. Mirror the magnetics of course and use them as a capacity/backup pool in case of a snafu on the non-redundant VM SSD store. You really should have another small SSD for napp-it VM storage/placement that is local (onboard SATA connected) - that way you can pass thru all stg devices to your AIO and not lose an SSD to storing your napp-it AIO VM. My 2 cents. A cheap Intel S3500 80GB will suffice, $40-60.
-Install ESXi however you prefer on target HW (looks like you have it on a bootable usb drive)
-DL the napp-it VMware-based appliance here: napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana, Solaris and Linux : Downloads (there's a bit of RTFM goodies there but Gea does most of the heavy lifting for us). Deploy to your vSphere infra via the unzip/upload/deploy-OVF method (I forget exactly what it's called lol - see the sketch after this list)
-Configure vt-d for HBA to napp-it AIO VM (requires a memory reservation)
-Boot the AIO napp-it VM, configure OS-level goodies (again w/in the napp-it docs/site referenced above), configure ZFS pools/shares/exports for clustered/shared NFS/iSCSI datastores...configure snapshots and zfs send/recv replication to the magnetic capacity/backup pool (local replication is free; between remote napp-it NASs it will cost ya)
-Profit
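
For the deploy step, a minimal sketch using VMware's ovftool (host address, datastore name and .ova filename are placeholders - the vSphere client's "Deploy OVF Template" dialog does the same thing):

ovftool --datastore=datastore1 napp-it_15d.ova vi://root@esxi-host/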

EDIT: I see you intend to use the SSD for napp-it AIO VM placement. Assuming onboard SATA, the other VMs are gonna crawl on magnetic disks only - you'll be able to run 3-5 VMs, if that, before things feel REALLY slow...May wanna think about a ZIL device for that magnetic pool.
 

whitey

Moderator
My hardware:
TS140 server, 4 cores, 20GB RAM (integrated I217 NIC for admin), 16GB USB stick for ESXi, 250GB local SATA SSD, IBM M1015 reflashed to LSI 9211-8i HBA with 2 * 1TB SATA HDDs connected (want to have them mirrored), I350-T4 quad NIC.

I installed ESXi 6.0 Update 1 on the USB stick (the integrated NIC and the I350-T4 were recognised / I didn't add any drivers) and the vSphere Client on a Win XP PC for admin.

I have set up passthrough for the HBA and that's as far as I've got - still quite in the dark about a lot of things.

I will be using this setup basically as a learning workstation (virtualization/software development /OS learning) and for storage.

According to the napp-it documents, I saw two scenarios for ESXi and OmniOS installation:
either ESXi on USB / OmniOS on SSD, or
ESXi on SSD (datastore) / OmniOS on a virtual disk on the same datastore for a mirrored boot-disk solution.

Q1: What's the best setup for ESXi and OmniOS if only 1 SSD + 1 USB are available, and what needs to be done (backup etc., and how) to ensure recovery to new disk(s) can be done easily?

ZFS send/recv replication
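
A minimal sketch of what that looks like from the CLI (pool/filesystem/snapshot names are placeholders - napp-it wraps this in its replication jobs):

zfs snapshot ssdpool/vms@backup-1
zfs send ssdpool/vms@backup-1 | zfs receive backuppool/vms
# later runs only send the delta between two snapshots:
zfs snapshot ssdpool/vms@backup-2
zfs send -i backup-1 ssdpool/vms@backup-2 | zfs receive backuppool/vms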

Q2: For the OmniOS installation, I was planning to use
napp-it_15d_ova_for_ESXi_5.5u2-6.0 - is this OK since it mentions ESXi 5.5 - the tools version?

That'll work

Q3: I've connected the 2 * 1TB HDDs to the first SAS connector on the HBA - is that right / does it matter for mirroring? If in the future I want to add 2 * SSD to the HBA (to have fast VMs - I would use the HDDs for backup then) and have them mirrored also, can I just connect them to the second SAS connector?

I'd prefer two pools, but if you are on a tight budget at least get the magnetic pool's writes accelerated/sucked up by a good ZIL logging device (an Intel DC S3700 100-200GB will serve you well over-provisioned)

Q4: All disks are new - do I need to do any sort of formatting beforehand?

Nope, they're gonna get owned by ZFS, GPT style. :-D (worst case, zpool create -f poolname device to force pool creation on the disks)

Q5: What is the min/ideal CPU/memory allocation for the OmniOS VM - still 2 CPUs, 6GB - and can this be changed later, i.e. do you just stop the VM for which you want to make the change?

I used to be cheap and try to go 2 vCPU / 4GB RAM until I got bit by horrific read speeds at some point - it seemed to be memory related. Happily running 2 vCPU / 8GB RAM now, managing several pools totaling roughly 10TB of data, FWIW.
 

epicurean

Active Member
Hi whitey,
Could you make some suggestions on best practices for configuring a ZIL device for the hard drives, and also for regular automated snapshots?
 

whitey

Moderator
Sure thing. A single ZIL device can absorb some 200-300MB/sec of log writes for a pool of slower magnetic devices, making them 'feel' like a pool of tens of disks performance-wise. A simple pool config for you could be something like this (you'll have to interpret for the napp-it GUI, this is CLI):

zpool create vmwarepool mirror magneticdevice1 magneticdevice2   # the 2x 1TB SATAs
zpool add vmwarepool log ssdzildevice                            # the S3700

napp-it has automated snapshot/replication (send/recv) jobs you can set up w/in the GUI as well; I'd have to boot up my napp-it appliance to show you the config. Will post back pics in a bit.
 

whitey

Moderator
Question of the day: why can I never seem to figure out how in the hell to create/enable an NFS share from the napp-it GUI/UI? Is it ACL related or hiding somewhere else on me? From the CLI I do it like this:

zfs create poolname/datasetname            # easy in the napp-it GUI, the rest eludes me to this day
zfs set sharenfs=on poolname/datasetname
share -F nfs -o rw /poolname/datasetname   # redundant once sharenfs is set
chmod -R 777 /poolname/datasetname
Mount in vSphere/other hypervisor/Linux box
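
For that last mount step from a Linux box, a minimal sketch (hostname and mountpoint are placeholders):

mount -t nfs napp-it-host:/poolname/datasetname /mnt/datasetname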

Another minor napp-it annoyance for me (may be napp-it newbie related, although I can bang on a CLI ZFS distro in my sleep): I cannot for the life of me figure out how to delete snapshots and have to resort to the CLI to nuke 'em. I can sure create snapshots from the napp-it GUI/UI though...grrrr, sure I'm missing something simple.
 

whitey

Moderator
Sorry for the photobomb - you all know how I love my screenshots to really explain things well. Overall the process is shown from soup to nuts (I may have missed a step or two for the sake of not making it 20 screenshots: the initial mount of the source NFS from napp-it to vSphere, the sVMotion from current storage to that NFS mount, then adding the replicated NFS datastore to vSphere and adding the replicated VM's .vmx to the vSphere inventory near the end).
 

[Attachments: screenshots of the napp-it replication setup]

vjeko

Member
whitey - all the info and your time is much appreciated! Lots of info; I got some of it and
will need to study it deeply as I go further, but a couple more questions for my next baby steps:

OK, so I will continue with ESXi on USB (and make a duplicate in case of failure), with the
2 * 1TB magnetics mirrored on the M1015 for a capacity/backup pool.

Can I add one SSD (for fast VMs) to the M1015 if it is mirroring the magnetics? I was thinking of
using the present SSD (Samsung 850 Pro 250GB - wasted space for the one AIO VM)
and buying a small SSD for the AIO (connected to onboard SATA) - or must all disks be mirrored?

You didn't comment regarding compatibility of my installed ESXi 6 Update 1 and the AIO
which has ESXi 5.5 tools (if I understand it right) - so I presume it is OK, and if not,
I guess it can be updated, i.e. I can go ahead with this?

Is the Vt-d memory/CPU allocation for the AIO 2 CPUs / 6GB min for my situation, and can I change it later?

All this is very new to me (a leap from Windows and TrueImage imaging ;) ), so other than the
above ideas on the SSD for the AIO (onboard SATA), SSDs for VMs and magnetics for backup
(all on the M1015), is there anything I need to do regarding
configuring the disks (ZFS pools/shares/exports/datastores/snapshots/replication etc.) in order
to ensure I can reinstall the AIO or VMs in case of the SSDs crashing? - I think this is the first thing
I will need to set up.

Thanks in advance
 

gea

Well-Known Member
Question of the day: why can I never seem to figure out how in the hell to create/enable an NFS share from the napp-it GUI/UI?
NFS (and SMB) shares are pure properties of a filesystem in Solarish.
You find the filesystem properties under menu "ZFS filesystems".
Click there on "off" in the NFS column of a filesystem's row to set it to "on",
or on "on" to set it off. The NFS service itself is enabled automatically.

Another minor napp-it annoyance for me (may be napp-it newbie related, although I can bang on a CLI ZFS distro in my sleep): I cannot for the life of me figure out how to delete snapshots and have to resort to the CLI to nuke 'em. I can sure create snapshots from the napp-it GUI/UI though...grrrr, sure I'm missing something simple.
Check menu Snapshots, where you can delete/destroy single snaps or click on a couple of them to delete. Check menu Snapshots > mass delete if you want to delete based on age and/or a string that must be in the name of a snap, optionally with an AND relation.
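
The CLI equivalents (pool/filesystem/snapshot names are placeholders) would be something like:

zfs destroy poolname/datasetname@snapname
# mass delete: every snapshot whose name contains 'daily' - check the list first!
zfs list -H -t snapshot -o name | grep daily | xargs -n1 zfs destroy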
 

gea

Well-Known Member
Can I add one SSD (for fast VMs) to the M1015 if it is mirroring the magnetics? I was thinking of
using the present SSD (Samsung 850 Pro 250GB - wasted space for the one AIO VM)
and buying a small SSD for the AIO (connected to onboard SATA) - or must all disks be mirrored?
The boot disk (for the storage VM) does not need to be mirrored.
Create a second fast mirrored SSD pool for the VMs.

You didn't comment regarding compatibility of my installed ESXi 6 Update 1 and the AIO
which has ESXi 5.5 tools (if I understand it right) - so I presume it is OK, and if not,
I guess it can be updated, i.e. I can go ahead with this?
The downloadable napp-it ESXi VM comes with tools for ESXi 6.

Is the Vt-d memory/CPU allocation for the AIO 2 CPUs / 6GB min for my situation, and can I change it later?
yes
 

vjeko

Member
I guess my question about attaching an SSD to the M1015 was not clear.
8 disks can be connected to the M1015 directly. I will connect 2 magnetics and have
them mirrored. Now, the other connections that are left over - can I use only
one of them, i.e. add just 1 SSD, or must all disks be mirrored?
 

whitey

Moderator
@gea THANKS! I was being silly and trying to hover down and NOT clicking ROOT menu, GOT IT! Keep up the amazing work good sir!
 

whitey

Moderator
I guess my question about attaching an SSD to the M1015 was not clear.
8 disks can be connected to the M1015 directly. I will connect 2 magnetics and have
them mirrored. Now, the other connections that are left over - can I use only
one of them, i.e. add just 1 SSD, or must all disks be mirrored?
All disks do NOT have to be mirrored, but it's a damn good idea to leverage mirror or raidz protection schemes. As far as connections: your M1015 simply provides two miniSAS 8087 ports, and each port talks to/supports up to 4 disks. You can hook up miniSAS to miniSAS, or a miniSAS to SATA forward or reverse breakout cable, depending on your application/HW/chassis/backplane. Not gonna cover all scenarios here until I understand your chassis/backplane config.

Looks like you're using a TS140 tower, so I assume no fancy/handy backplane but rather tower mounts and individual power/SATA connections to each device - so you're probably looking at a miniSAS to forward breakout cable (the more regular/commonly used one).
 

vjeko

Member
Thanks whitey. Yes, TS140 tower & no backplane, & I just got the SAS-SATA cables from eBay
- had to get 90-degree SATA ends and I couldn't find them here. Good tower server
for learning - maybe long term for a NAS. The 1 SSD scenario is just temporary / I don't
have the dough for the SSDs now. OK, I'll fire it up with what I have now - will surely
be back with questions ;)
 

vjeko

Member
OK, I'm back with more newbie questions.
The storage hardware is now as follows:
2 * 16GB SanDisk USB for ESXi (one is used / the other duplicate is kept as a spare)
2 * DC S3500 120GB connected to local SATA (to be mirrored for OmniOS)
1 Samsung 850 256GB SSD & 2 * 1TB HDD connected to the IBM M1015 (want to run everything
mirrored but no money for another SSD at the moment).

Basic questions about setting up OmniOS / storage usage:
- ESXi is on USB - where should I store temporary data/logs etc.
in my storage scenario?
- Is it best to use the entire DC S3500 as one datastore and only for OmniOS?
- Any additional ideas on use of the M1015-attached storage for
running VMs and general storage/backup?
 

gea

Well-Known Member
A basic ESXi setup runs from RAM.
Logs are stored on the local datastore with the VM.

If you are thinking about the new local web interface for ESXi
instead of the Windows app vSphere, you should install
ESXi on the SSD - together with the local datastore
(can be any SSD from 30GB+, you may buy new).
This does not need to be mirrored as there is nothing important on this disk.

You then install OmniOS onto the local datastore.
You can mirror it, but there is nothing important on this disk.
You can reinstall the .ova template within minutes.

A mirror of two DC S3500s is best suited for an NFS share where you
can store some critical VMs.

The Samsung Pro is perfect for your desktop.
 

vjeko

Member
All my questions are aimed at trying to come up with
a setup which is simplest to recover from after an ESXi/OmniOS
disk crash.

As I don't have any experience with either ESXi or OmniOS
local or HBA disk crashes/failures, I automatically thought
of the ideas above together with mirroring:
"ZFS mirror your OmniOS bootdisks" as described
in the napp-in-one.pdf and
"All-In-One with SmartOS/OI on mirrored ZFS bootdisks"
as a way to make the system more foolproof (for a newbie ;) ).
But gea, your comments have thrown me a bit (it seems
recovery is easy without an up-to-date state of the system,
which is what mirroring provides) - I would appreciate
a bit more information.

(a) If ESXi boots from a USB stick - what does it mean to have a duplicate
USB stick in case of failure? When should this duplicate USB be created,
and what changes or admin actions will modify ESXi
to such an extent that it will not work if an initial duplicate is used after a crash?
I.e. are there any actions/admin changes etc. that warrant updating the duplicate /
periodically backing up ESXi?

(b) Why did you tie the need for an SSD for ESXi to using the new web
client (the new free web client interface)?

(c) Regarding the ESXi logs - in my situation,
with ESXi booting from USB (and no datastore having been available
at installation time), the logs are in a RAM disk (by default), i.e.
wiped at each shutdown. The logs would of course be useful
if persistent - where should they be stored to be persistent
and easily available after a crash if using a USB stick as the boot device?

(d) If, for example, OmniOS is on the datastore on a local SSD / not mirrored:
can you elaborate on your statement about OmniOS,
"You can reinstall the .ova template within minutes"? So OmniOS
will pick up the storage information from scratch? (If yes, then I really
don't see the benefit of mirroring the OmniOS installation.)

(e) Regarding the Samsung 850 Pro - did you mean it isn't good for this
server setup (only good for a desktop)?
I just looked on eBay & an SSD around 30GB is about the
same price as the DC S3500, which I got for about 60 Euro.
 

gea

Well-Known Member
a
Worst case, your USB stick with ESXi dies.
If you have an up-to-date clone, you can use it to boot up - no downtime.
If you have an outdated stick, you must reimport your VMs - downtime up to 15 min.
If you must reinstall, downtime is at least 30 min if you have the ISO and a new stick available.
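
A minimal sketch of keeping such a clone, using dd on a Linux box (the device names are examples - verify them with lsblk before writing):

# /dev/sdb = current ESXi stick, /dev/sdc = the duplicate
dd if=/dev/sdb of=/dev/sdc bs=4M conv=fsync status=progress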

b
If you boot ESXi from a USB stick, boot time may be several minutes, but once ESXi is up, the stick is not needed anymore. With a local webserver on ESXi you have constant reads/writes, and a stick affects management performance and reliability.

c
What logs are you looking for?
Logs for the VMs are in the VM folder.
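
If you want the ESXi host logs themselves to survive a reboot, you can point the syslog directory at a datastore; a sketch (the datastore name is a placeholder):

esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/esxi-logs
esxcli system syslog reload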


d
Worst case, your disk with the local datastore dies.
You replace the disk and import it in ESXi as a local datastore.
Then you can reimport the napp-it VM via the .ova, add the HBA for pass-through, import the pool and share it via NFS. You may need to reimport your VMs. There is nothing important on OmniOS; everything is in the pool. If you need an application server, use another VM on ZFS storage with snaps, not the storage VM.

Downtime is about 30 min.
If ESXi was on the SSD, add at least 30 min.
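
The pool import step is a one-liner from the OmniOS CLI (pool name is a placeholder; napp-it also has a menu for this):

zpool import              # lists pools visible on the passed-through HBA
zpool import vmwarepool   # import by name; add -f if it was not exported cleanly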

e
The strength of this Samsung is the desktop. The Intels are enterprise SSDs with powerloss protection; they are perfect for storing VMs. You may use disks like the Samsung without powerloss protection, but then you should add a ZIL SSD with powerloss protection if you want to be protected against a corrupt filesystem on a powerloss - important for production systems. For a home or lab setup you may ignore this.

But such ZIL or secure sync-write considerations are a separate aspect.
 

vjeko

Member
I get a feeling for what you wrote but will
understand it in detail only in case of an unfortunate failure / by simulating it.

(c) logs - it's the ESXi logs - now in RAM / being wiped at shutdown.

OK, so in order to avoid problems, I'll stick to Intel SSDs.

Now, in summary: to avoid problems with the web client, which I guess
is my best alternative for ESXi administration (as the vSphere client
has limited functionality), I need to put ESXi on a local SSD (this will also give
me persistent logs).
Is there a big difference in how easily one is able to bring up ESXi or OmniOS
in case of failure if both ESXi and OmniOS are put on the one local SSD, or should I use
two (ESXi on one and OmniOS on the other), or is having both ESXi + OmniOS
on one SSD + mirroring then a better solution?
 

gea

Well-Known Member
I usually use an SSD for ESXi and OmniOS, as a restore of both is done quite easily.

Management of the free ESXi is usually done via vSphere on Windows.
You can manage all free options with the current vSphere client.

To manage all features, you can use the web client with vCenter,
but this is not a free option.

The free new local web option on ESXi is currently under development.
When it's ready it may replace vSphere with the same features.