Do these benchmarks look normal for napp-it?


nonyhaha

Member
Nov 18, 2018
Hello,
I have been using napp-it for some time and recently moved from server-grade to consumer-grade equipment, because I needed to lower the power consumption of my server.
I have the following results while testing transfer rates between two VMs on an ESXi host:
(see attachments)
The disk and iozone results are screenshots from the napp-it GUI.
iperf3 is a screenshot from a client running on the Windows Server VM.
raidz1 hdd and raidz1 ssd are CrystalDiskMark screens from the same Windows VM.
I have mapped the shared drives through SMB.

I am trying to figure out why my SSD pool shows the same results as the HDD pool. Both are raidz pools consisting of 4 physical devices.
Is this normal? If it is, am I right to assume I would be better off with 4x 2.5" 10k rpm drives for the small pool, which I currently use for a MySQL database for Home Assistant, Nextcloud with about 250 GB of photos, and the Emby cache directory for "faster" loading times?
Also, I already have 30 TBW on my SSDs after 415 days :).

Thanks!
Noni.
 



gea

Well-Known Member
Dec 31, 2010
Without further details (CPU, RAM, vnic, etc.):

Iperf: 6-9 Gb/s is ok
IOzone is ok
CDM (SMB) is ok

You should always check (empty pool, compress, enc, dedup: off, ZFS cache: on)
- local pool performance (I would suggest the test series on Pool > Benchmark that shows random io performance in sync mode)
- network via iperf
- file service, e.g. SMB

IOzone shows that the SSDs are much faster. Pool > Benchmark should show differences with small IO,
where SSDs are faster; sequentially, SSD and HD can be equal. If results are below expectations, this helps to find the weak points.

For tuning: increase the tcp/vmxnet3 buffers, use a recsize >= 256k for media files and a small recsize of 32-64k for databases/VMs, and enable Jumbo frames.
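
A rough sketch of the recsize part (the pool and filesystem names here are only placeholders, not from your setup):

zfs set recordsize=256k hddpool/media       # large records for sequential media files
zfs set recordsize=32k ssdpool/mysql        # small records for database/VM data
zfs get recordsize hddpool/media ssdpool/mysql

recordsize only applies to newly written blocks; existing files keep their old record size.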
 

nonyhaha

Member
Nov 18, 2018
Hi Gea, thanks for the usual fast reply.

1. I am now running on an i5-10400T :) 4 cores allocated to the napp-it VM, 5 cores to the Windows VM. 32 GB DDR4-2666 allocated to napp-it, 20 GB to Windows.
VMs are stored on a separate NVMe drive used directly by ESXi.
vNICs are all set to vmxnet3. All VMs are on the same port group on a vswitch with MTU 9000. napp-it and Windows have MTU 9000 enabled.
TCP parameters on napp-it:
ipadm set-prop -p max_buf=4194304 tcp
ipadm set-prop -p recv_buf=1048576 tcp
ipadm set-prop -p send_buf=1048576 tcp

2. I can't do the empty pool test anymore, but the HDD pool speeds are OK for my needs.
Compress, dedup and encryption are off on all my pools. Cache is on.

3. All record sizes are 128k, on both pools.
Indeed, the expectation is that the SSD pool would be a lot faster with small IO, but I do not see a great difference in the tests I posted: random reads are about the same, and random writes show a difference, almost double for the SSD pool.

Is it worth burning through SSDs for such a small difference, or should I swap them for 4x 2.5" 10k drives? Cheaper, more capacity?

Should I run different tests? A smaller file size?
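For example, would something like this be the right kind of test (the path is just a guess for a scratch folder on the SSD pool)?

iozone -i 0 -i 2 -r 4k -s 1g -f /ssdpool/test/iozone.tmp    # sequential write plus random read/write with 4k records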
 

gea

Well-Known Member
Dec 31, 2010
You can expect no more than around 100 physical IOPS from a mechanical disk, while desktop SSDs can give a few thousand IOPS and datacenter NVMe like the Intel Optane 58x up to 500k IOPS continuously on 4k writes. While the advanced RAM-based ZFS caches allow good values from disk-based pools on some workloads, you see a huge advantage of SSDs on small IO like sync write, which you should enable for VMs or databases.

So the rule: for large capacity, a media or office filer, disks are good; for VMs or databases, use SSDs. There is no room left for 10k disks: not cheap or large enough, and not as fast as SSDs, which are quite cheap now.

As long as the pool fill rate is below say 50%, the performance degradation is quite low. Above say 70%, performance suffers.
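
To check the fill rate:

zpool list      # the CAP column shows how full each pool is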

Run Pool > Benchmark (this menu, not a submenu) for details about small io with sync write.

btw:
random read = cache quality/performance
random write with sync enabled = quality of the disk
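
To force sync write on a filesystem for such a test (the dataset name is only an example):

zfs set sync=always ssdpool/vmstore     # every write goes through the ZIL
zfs inherit sync ssdpool/vmstore        # revert to the default (sync only when an application requests it)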
 

nonyhaha

Member
Nov 18, 2018
For my fast-storage needs, the pool made of SSDs should be enough. Would I need to enable sync if I am using a live UPS? The only thing I could think of going bad would be a component failing badly, like the PSU, motherboard, CPU, RAM or controller, that could affect the situation, correct?
 

gea

Well-Known Member
Dec 31, 2010
A minimalistic OmniOS is very stable, but even there you can have a kernel panic due to RAM or other hardware problems. For VM filesystems or databases there remains a risk that sync write avoids, at the price of a lower write performance. The real problem is that after a crash you cannot really trust a VM or later backups. The risk is small, and it is a decision of security vs performance, so you may decide against sync write. The same with SSDs without powerloss protection: a single SSD without PLP cannot be trusted 100%, so it is absolutely not an option for an Slog. In a ZFS raid, data corruptions on SSDs may be detected and repaired on reads. Also a decision of security vs price.

Can you add the output of Pool > Benchmark showing the performance loss with sync write for the disk and SSD pools?

example: My low power server with an older Intel DC 3600 NVMe

(attached screenshot: pool_bench.png)

345 MB/s sync write (multistream write) is more than enough for VMs
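
If you cannot use the menu, a very rough manual comparison is also possible with dd on a scratch filesystem (names and sizes here are only examples; with compression off, /dev/zero is fine as a data source):

zfs set sync=always ssdpool/scratch
ptime dd if=/dev/zero of=/ssdpool/scratch/f1 bs=128k count=8000    # ~1 GB, every write goes through the ZIL
zfs set sync=disabled ssdpool/scratch
ptime dd if=/dev/zero of=/ssdpool/scratch/f2 bs=128k count=8000    # same write without sync
zfs inherit sync ssdpool/scratch                                   # restore the default afterwards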
 