newbie questions


vjeko

Member
Sep 3, 2015
A lot of this is new to me, so I may be doing something stupid :)
I have an AIO with disks as per the attached screenshot from napp-it.
The disks should be set up as follows:
1 local SATA SSD for ESXi + OmniOS,
the rest of the disks passed to OmniOS via an LSI 2008:
2 * 1TB HDDs (mirror) - for backup
2 * 120GB Intel S3500 SSDs (mirror) - for VMs
1 * 240GB Samsung 850 Pro SSD - for a VM

I initially ran an old version of OmniOS for a while with the disks set up as above
and just 2 VMs on the 240GB SSD. I then exported the pool (without saving any
settings in napp-it), installed OpenIndiana instead of OmniOS (didn't
do anything about the disks etc.), and then installed the newest version of OmniOS
with napp-it instead.

My questions:
(a) What is the correct way to get the new version of OmniOS to see the already configured
mirrored disks / pools / filesystems etc.?
(b) I am now hearing a dum-dum sound from the disks every 30 seconds or so; what is this and how
do I stop it?
(c) What is the c1t0d0 (NECVMware) "removed" disk, and what should I do about it?
 


dragonme

Active Member
Apr 12, 2016
I would not waste space on the SSD for ESXi. If you have a USB header, just boot ESXi off a USB stick, or an internal CF card, whatever. Once loaded it runs from memory, so a fast boot disk buys you nothing.
 

gea

Well-Known Member
Dec 31, 2010
a.
Just import the pools with menu Pools > Import (or see the CLI sketch at the end of this post).

b.
Probably a background task like monitoring or acceleration.
You can disable them in the top menu to the right of Logout.

c.
This is your CD drive.
The Disks menu is based on the output of iostat, which remembers all disks and iostat errors since boot time.
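
If you prefer the command line, here is a minimal sketch of the equivalent zpool commands (assuming the pool is named "tank"; replace with your actual pool name):

# list pools on the attached disks that are available for import
zpool import

# import a pool by name; -f forces the import if it was last in use on the old installation
zpool import -f tank

After the import, the pool and its filesystems should show up again in napp-it.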
 

dragonme

Active Member
Apr 12, 2016
@gea

His question (b) brings up something I have been meaning to ask.

I have a pool with datasets that are only shared via the napp-it kernel SMB server to a Linux VM. When idle, with no activity taking place, it will not allow the disks to spin down fully; the SMART data for those drives racked up thousands of wakes in just a couple of days.

So what is polling the drives: OmniOS, SMB, the Linux SMB client, napp-it? I use the free version, so there should be no active monitoring or acceleration?

To keep the drives from being thrashed I now have a workload that is constantly writing to that pool (security cameras), but I would prefer not to, as these are just large 8TB Red drives for bulk media storage and not rated for 24/7 writes.

Thanks
 

gea

Well-Known Member
Dec 31, 2010
With napp-it, the acceleration and monitoring tasks or the alert job access a pool.

On Solarish there may be a smartmontools daemon or the fault management daemon fmd.

There may also be a client action that accesses the pool.
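
A rough way to narrow down the culprit from the OmniOS side (a sketch; the grep patterns are only examples and service names can differ between releases):

# watch per-disk I/O over 30-second intervals to see how often the pool disks are hit
iostat -xn 30

# list services and look for smartmontools or fault management entries
svcs -a | grep -i smart
svcs -a | grep -i fmd

# disable a suspected service (it can be re-enabled later with 'svcadm enable')
svcadm disable <service-fmri>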
 

vjeko

Member
Sep 3, 2015
gea,
I haven't gotten my head around ZIL/L2ARC, so I dug around the old threads
and found this:
https://forums.servethehome.com/ind...r-noob-questions-esxi-omni-nappit.8117/page-2
where you indicated "create a virtual disk on the local datastore and use for l2arc".
As I'd like to wrap up my little AIO, I'd like to make sure I have what is needed
for data integrity (speed is of secondary priority - it is for home use after all),
so I'm wondering:
(1) Should I create the L2ARC (I presume the local datastore refers to the local SATA
datastore where OmniOS is located)? How large, and how do you activate it ;) ?
(2) Do I need a ZIL device? How big, and how do I activate it?

Note: I updated the list of disks above with model numbers (the SSDs will probably need
to be bigger later) in case the use of disks with power-loss capacitors impacts the answers.
 

gea

Well-Known Member
Dec 31, 2010
L2ARC is a slower addition to the fast RAM-based read cache.
Unless you want read-ahead, RAM is the key factor for performance. With enough RAM an L2ARC is useless, and if you have too little RAM, add more RAM instead.

Open-ZFS uses a RAM-based write cache (10% of RAM, max 4GB). On a crash its content is lost. While this does not affect ZFS consistency (due to CopyOnWrite), in some cases you cannot accept this data loss (databases, VMs).

If you need secure write behaviour without this data loss, you can use ZFS sync write. This activates logging of every single committed write to the pool, similar to a BBU on hardware RAID, to make them crash resistant. With slow disks this can reduce your write performance to a fraction of its value without sync write.

If you add an Slog to your pool (menu Pools > Extend), this logging is done on that device. With a lower latency and higher write iops than the pool, you reduce the performance degradation. An Slog device like an Intel Optane (900P or better) limits this degradation to an absolute minimum. By default, the size of the Slog on Solaris should be at least 2x the amount of data that is delivered in 5s. On Open-ZFS, e.g. OmniOS (a free Solaris fork based on Illumos), you need around 8GB.
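
For reference, a command-line sketch of adding such devices (pool name "tank" and disk names c4t1d0/c4t2d0 are placeholders for your actual pool and disks):

# add a dedicated Slog device to an existing pool
zpool add tank log c4t1d0

# add an L2ARC (read cache) device - only worthwhile when RAM is already maxed out
zpool add tank cache c4t2d0

Napp-it's Pools > Extend menu does the same thing through the web UI.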
 

vjeko

Member
Sep 3, 2015
gea,
It's a small AIO using a Lenovo TS140 server / no Optane technology.
The server can take a max of 32GB memory but has only 20GB now.

(1) OmniOS is given 3GB now. From what you said, I conclude it's wiser to focus on the amount of
memory for ZFS instead of adding an L2ARC device. Should I increase the 3GB? How do I
know how much should be added, and is there any setting that needs to be made besides
increasing the memory?

(2) Is one Slog device used for all pools (magnetic disks + SSDs), mirrored/non-mirrored disks,
for disks with (e.g. Intel S3500) / without power-loss protection? I presume an S3700 100GB
would be the best option for me.
 

vjeko

Member
Sep 3, 2015
I didn't quite understand this: "You do not need an Slog for SSD only pools performancewise" - OK, so an Slog is only for magnetic disks, but what do you mean by "only pools performancewise"?

I unfortunately dived into the AIO build too quickly with too little knowledge
and now see that I will quickly outgrow the TS140 as far as my expectations
are concerned, but I'm stuck with it and I don't have the possibility
of adding a 900P due to the limited number of PCIe slots etc. (and cost ;)),
so a 100GB S3700 is, I guess, my only option.
 

gea

Well-Known Member
Dec 31, 2010
You can use an Slog with SSD pools to add, for example, powerloss protection.

Performance-wise, your Slog must be faster (lower latency / more write iops) than the combined power of the whole pool. Your SSDs must be quite bad and the Slog must be very, very good for this to be true.

An S3700 is ok if you can get one cheap (used).
 

vjeko

Member
Sep 3, 2015
gea, I've decided to get an Slog for the HDD pool (powerloss protection being
the main reason, performance secondary), but without experience/full understanding there's still
something unclear about what I should do with the SSD pool.

Before my questions about that issue, I would like to know whether the ideal way of using the
pools is to store only the VMs on the SSD pool (i.e. the SSD is used mainly for reading - I can imagine
software written once and only settings and logs being re-written) and the user data on the HDD pool?
If so, would an Slog be warranted for the SSD pool anyway?

If the SSDs in an SSD pool have powerloss protection built in, e.g. Intel S3500/S3700 or newer,
then I presume an Slog will not give any further powerloss protection?

(I can't see any cheap used S3700/S3710 200GB; new ones are approx. 240 Euro.)
 

dragonme

Active Member
Apr 12, 2016
There are several reasons you would still want a slog device (or devices) for an SSD-only pool.

One of them is that ZFS is not optimized for SSDs. ZFS doesn't care or know that you are using SSDs, and for sync writes it will still use the behavior (without a log device) of writing the intent log (ZIL) to the pool SSDs first, then writing the data again. This behavior is needed for spinners but is largely wasted on SSDs, which could just do the write and skip the ZIL write, since head alignment and rotational latency aren't a thing for SSDs.

Here are some other reasons, though:

The main pool of SSDs doesn't have power-loss protection and the slog does, or you want to buy cheaper read-optimized SSDs for the pool and one write-optimized SSD (think Optane) for speed.

If you want to limit media wear on the main pool SSDs, then an over-provisioned, write-optimized slog buys additional life on the main pool.
Additionally, a slog will reduce fragmentation on the pool SSDs, which speed-wise might not be that big an impact, although some have argued about small writes below the native block size... it is probably a thing, just really down in the weeds.
 

gea

Well-Known Member
Dec 31, 2010
An Slog SSD/NVMe for a disk-based pool is always good for sync write performance.

For an SSD pool, an Slog can make sense either when your pool SSDs lack powerloss protection or when your Slog is (much) faster than the combined write performance of the pool for small random writes. For the latter you need a very fast Slog (Optane) and a quite slow SSD pool. Often this is not the case, as a whole SSD pool is fast enough or as fast as a single Slog.

I would describe the write behaviour of ZFS (no matter whether disk or SSD) like this:

- Every write goes to the RAM-based write cache (e.g. 4GB) and is committed to disk
as a large sequential write when the cache is full (OmniOS) or after 5s (Solaris).

- On a power outage or crash, the cache content (already confirmed to the writing application) is lost.
As ZFS is CopyOnWrite, the filesystem consistency is not affected.

- If you want to protect the cache content, you can enable sync write.
In this case all writes go to the RAM cache as usual, but additionally every committed single write action
is logged to the on-pool ZIL logging device. While the ZIL is optimized for small random writes, it is slow on a disk pool.
If you use the ZIL to protect your RAM cache, the pool disks must offer powerloss protection (disks offer this, most SSDs do not).

- If you want to improve sync write performance, you can add an Slog to the pool.
An Slog does not work in addition to the ZIL but replaces it to be faster.
An Slog must offer PLP, low latency and many write iops with small blocks under steady load (it must be really fast).
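
To put the sync write part in command form (a sketch; "tank/vm" is a placeholder dataset name):

# show the current sync setting; 'standard' only logs writes the application requests as synchronous
zfs get sync tank/vm

# 'always' logs every write to the ZIL/Slog; 'disabled' never logs (fast, but confirmed writes can be lost on a crash)
zfs set sync=always tank/vm

# after adding an Slog, it shows up as a 'logs' vdev in the pool status
zpool status tank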
 