Looking to update OmniOS/NAPP-IT from r151014


dragonme

Active Member
Apr 12, 2016
gea...

speaking of interrupts.. this is interesting...

on this rebuild using 151022.. updated

I get gigabit wire-speed LAN transfers from napp-it over SMB into a macOS host.. over 100 MB/s typical.. and napp-it shows about 10k interrupts
the transfer is pretty flat and not spiky.. I would call it excellent sustained transfer for 5 GB files etc..

when I take a large file or set of files and SEND them into napp-it... I get between half and 3/4 of that speed.. call it on avg 55 MB/s to 70 MB/s, but interrupts are off the chart, like 25,000 or more.. so the omnios/napp-it VM is generating roughly double the interrupts for roughly 1/2 to 2/3 the speed?

I am using the standard image, and the SMB connection is through the physical Intel 1Gb port on the Intel motherboard, with the napp-it VM using the standard E1000 interface. I used your standard tuning, which I believe only tunes the vmxnet3 interface for TSO.. should I need to adjust the E1000 similarly? I am no DTrace wizard so I have been unable to zero in on which process is the culprit
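
for reference, this is roughly what I have been poking at from the OmniOS console to see what the link is doing while a copy runs (just a sketch.. e1000g0 is only an example interface name):

Code:
# list links and look at the E1000 / vmxnet3 link properties (names are examples)
dladm show-link
dladm show-linkprop -p mtu,flowctrl e1000g0

# watch interrupts and per-link traffic while a transfer runs
intrstat 5
dladm show-link -s -i 5 e1000g0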

also, the disks at both ends are capable of sustained writes well in excess of 300 MB/s so it's not that.. and napp-it graphs only show ~30-40% wait or busy on the pool during the write. napp-it does fill about 8 GB of the 10 GB the VM has allocated to it during the write to the box, as well as during download ops.. it seems to release it right away.. actually memory use seems pretty stable on this release.

napp-it is given 4 vCPU and 10GB memory.
during heavy up/down link activity... cpu seems capped right at 50%.. I am guessing whatever the bottleneck is, it's single threaded?
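
if it helps, this is roughly how I have been trying to confirm the single-thread theory (just a sketch, plain illumos tools):

Code:
# per-core utilization: one core pinned near 100% while the rest idle
# points at a single-threaded bottleneck
mpstat 5

# per-thread (LWP) microstate accounting for the busiest processes
prstat -mL 5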

thoughts?
 

gea

Well-Known Member
Dec 31, 2010
3,141
1,182
113
DE
- I would switch to vmxnet3 first (much less cpu load than e1000).
- If you can, assign more RAM (for ZFS read/write caching)
- Reduce number of vCPUs
Start with 1 vCPU and increase after a performance test.
 

dragonme

Active Member
Apr 12, 2016

thanks gea.. will give it a shot..

I tried going to 2 vmxnet3 ports a couple years ago and it didn't work well.. very spiky, but will try again..
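
for my own reference, swapping the adapter over should just be: replace the E1000 with a vmxnet3 NIC in ESXi, then re-plumb it in OmniOS.. a minimal sketch (vmxnet3s0 is the usual device name, the address is only an example):

Code:
# after swapping the E1000 for a vmxnet3 adapter in ESXi
dladm show-link                                          # should now list vmxnet3s0
ipadm create-if vmxnet3s0
ipadm create-addr -T static -a 192.168.1.10/24 vmxnet3s0/v4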

as for cpu.. these L5640s, while power efficient, are only like 2.4 GHz, so single-threaded stuff might be getting held back.. didn't think of reducing core count.. I figured that with one pool on 2x SSD for VMs on an LSI HBA and a 3x 8TB pool for media data hanging off the board's ICH10, I would need to give napp-it enough threads to handle multiple NFS and SMB requests.. but will try 1 vCPU to see what happens

I am thinking of raising the napp-it VM to High latency sensitivity and reserving its CPU.. it already has its memory reserved due to the passthrough.. but reserving the CPU and telling ESXi it's a latency-sensitive VM should reduce the system interrupts and dedicate more resources... on a box with 24 vCPUs and only 4 VMs running it should be easily doable from a cpu resource standpoint, as I am nowhere near 1:1 subscription on cpu yet, let alone oversubscribed.. even with napp-it getting thrashed with 30k interrupts, vm wait is barely off the peg... so it's running when it wants for the most part already.. just a bit of latency due to the interrupts..
 

NOTORIOUS VR

Member
Nov 24, 2015
I cannot confirm any problems with OmniOS 151028.

So I would -
- try another OmniOS (e.g. my OVA template v4 if you use an older one; v4 comes with the newest fixes)

If this does not help
- install OmniOS from iso and check with e1000
- add vmware tools (pkg install open-vm-tools) and try vmxnet3s
Hi Gea,

sorry, but I am at a loss as to why I have such performance issues with napp-it/OmniOS, on my old server and on my newest server, with new drives, etc.

I have tonight made a quick test with the latest v4 OVA on a fresh install of ESXi 6.7U1 on my new server: 8x 8TB WD drives in an SA120 set up in mirror 4x Z2, and I simply cannot get any sort of stable performance out of it. 45 GB of RAM assigned to napp-it.

Writing to the server from a 10GbE VM it's all over the place, from 90-180 MB/s. Read performance is poor as well at 180-200 MB/s, unless I immediately attempt to read again, and then I see ~350 MB/s; I suppose what is in RAM cache is used, and then speed drops back to the usual 180-200 MB/s.
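
(Something I still want to try, just as a sketch: watching the pool itself from the napp-it console while a copy runs, to see whether the pool or the network is the bursty part. 'tank' is only a placeholder for my pool name.)

Code:
# per-vdev throughput every 2 seconds while a copy runs
zpool iostat -v tank 2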

napp-it optimizations do not seem to change anything for me either.

I then tried a few other ZFS configurations with my disks... 8x Z2 array, and then some mirrored stripes, 3x Z1 mirror.

Here is a link to the benchmark tests from all the configs here: test benchmarts.txt

Conversely, the same setup under a FreeNAS VM performs much better: mirror 4x Z2, with just 8 GB of RAM for the VM, I get a very consistent 170-180 MB/s write to the disks and 350-360 MB/s read, all the time.

Then I tried the 3x Z1 mirror on FreeNAS and gave it 45 GB RAM as well... copy from a local 10GbE VM to FreeNAS 240-260 MB/s, read from FreeNAS to the local VM 820 MB/s, copy from a remote 10GbE VM (on my old ESXi server) 550 MB/s.

Copy to my desktop (1GbE) from FreeNAS is a solid 110-113 MB/s. Write to FreeNAS from my desktop is just as solid at 112-114 MB/s. So also none of the strange speed issues I see between my desktop and the latest napp-it/OmniOS.

I'm out of steam for tonight for testing, but I will still re-test the above with napp-it 151014 and I assume I will again see better speed than the latest version(s).

Thank you
 

gea

Well-Known Member
Dec 31, 2010
DE
Pool and NFS performance should not differ too much between OmniOS 151014 and 151028. The main differences are SMB1 vs SMB2, newer drivers for newer hardware, and some newer ZFS features. If 151014 is much faster locally, hmm - what firmware is on your LSI HBA, and have you compared SATA/AHCI performance vs the HBA?

From your benches, read and write values are slower than expected.
Your values indicate a sequential write performance of around 80 MB/s per disk - it should be much higher.
Your read values, especially the random reads, are also poor. Your sync write values are as expected.

Have you tried different vCPU settings (e.g. 1 vCPU as a start) and optimized the storage VM for latency?

Performance from/to Windows is a different aspect from local pool performance, as SAMBA vs the kernel-based SMB server and FreeBSD vs Solaris can have their own compatibility problems.
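
As a rough sketch of what I would watch on the OmniOS console while a benchmark runs (device names differ per system, and sas2flash is only there if the LSI utility is installed):

Code:
# per-disk throughput and service times; one slow disk drags a whole raidz vdev down
iostat -xn 5

# controller and firmware details of the LSI HBA (only if sas2flash is installed)
sas2flash -listall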
 

NOTORIOUS VR

Member
Nov 24, 2015
Hi Gea, as always thanks for replying!

I agree, the speeds I am seeing on my new server are much slower than on my old server with 10x WD 4TB drives... that did stand out to me and I found it quite odd. Everything about the new system "should be" better. It's newer enterprise hardware in every way.

I would have to double check the FW on the HBA; it was what I pulled from a few sources. I suppose I can also attempt to see the results with one of my PCI-E based HBAs in my old server and see if that makes a difference vs using the HBA on my mobo (but based on the speed of FreeNAS I'm guessing it's not hardware).

Re. VM optimizations, I haven't really made any changes to the settings as defined by your OVA (2 CPUs, etc.). I just gave it more RAM (and reserved it), and added the HBA. I didn't see any VM-specific things for optimization but I will poke around.

I'll see if I can do some more digging this weekend with various tests etc. Would be nice to be able to at least match the speed and stability I'm seeing in FreeNAS but in napp-it. I'm pretty much ready to complete the server build (just waiting on 2 more 8 TB drives so I can make my 10x Z2 pool).

Gea, is there some testing I can perform under FreeNAS so that we can compare disk performance benchmarks like the ones you have in napp-it?

I suppose I will never get anywhere unless I understand if this is a disk/controller/system/memory/network issue.
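
(One thing I plan to try is a purely local test on the napp-it console to take the network out of the picture. A rough sketch only: 'tank' is a placeholder pool name, compression is turned off so the zeros aren't compressed away, and the file needs to be bigger than RAM so the read-back isn't just coming from ARC.)

Code:
# scratch filesystem with compression off
zfs create -o compression=off tank/bench

# sequential write, then read back (file should be larger than RAM)
dd if=/dev/zero of=/tank/bench/testfile bs=1024k count=65536
dd if=/tank/bench/testfile of=/dev/null bs=1024k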
 

NOTORIOUS VR

Member
Nov 24, 2015
Hi Gea,

I've run a few more tests... changing the vCPU count doesn't seem to make a difference. RAM does make a big difference, even when running all the tests with the primary/secondary cache set to none (raw disk test). Currently I have allocated 30 GB to the new server's VM and 24 GB to the old server's VM. Both are running the latest v4 OVA.

I have tested the following using 2x 1TB MX500 SSDs (both brand new) below...

On the old system, I added both drives to the SAS breakout cables, set them up as a MIRROR, and ran the benchmark test.

On the new server, I passed the MB's SATA controllers through to napp-it and the results were horrible; even on the two SATA 3.0 connectors it wasn't pretty.

On the SAS card, through the SA120, the numbers are decent. They seem a little better, by about 20-30 MB/s, with a direct connection to the SAS card via breakout cables, but that's not a deal breaker for me.

I've put all the tests into this file and labeled each test to know what is what. Would love to hear your thoughts.

Link: MAR16-bench-2xSSD.txt


I don't know if it's worth it to pull one of the CPUs and half of the RAM and see if that changes anything? I'm hoping there is no fault with my hardware, but it is all used; I think the SSD results show nothing is really wrong with the hardware. But again, I have nothing to bench it against except my old system.

So I'm still wondering why my speeds are so bad when using the spindle drives.

EDIT... so it looks like the 4x Z1 mirror is doing its job well! I guess I should reserve my final judgement until I get my remaining two 8TB drives, and then I can do a direct test of 10x 8TB in Z2 vs my old server (10x 4TB in Z2).

Link to bench: MAR16-spindleSDD-bench.txt
 

NOTORIOUS VR

Member
Nov 24, 2015
@gea

Well, I finally have all my disks and ran various benchmarks with different configs/options, and I'm happy with the performance considering this array (10x 8TB in Z2) will only be storing & serving media.

I suppose the lesson learned here is that disk configuration makes a pretty large difference in throughput.
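
A rough back-of-envelope that helped it click for me (assuming something like 150 MB/s sequential per disk): a 10-disk Z2 stripes data across 8 disks, so sequential throughput can in theory approach ~1.2 GB/s, but random IOPS stay at roughly single-disk level because it is one vdev; five 2-way mirrors write sequentially at more like ~750 MB/s but give roughly 5x the random IOPS.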

I'm going to see if there are any other optimizations I can manage, be it network settings, NFS, etc., and then start to copy my data over to prepare for the migration to the new server.

Thanks for all the help and hand holding, by the way... it's not always easy to deal with us rookies, I'm sure :p

link to benchmarks: MAR23-ZFS-benchmarks.txt

and some visual candy copying from a Win2k16 VM to/from napp-it:

Read to VM - ~630 MB/s dropping to ~350 MB/s:


Write to napp-it from VM - pretty steady 360-380 MB/s:


One last thing, Gea: do you know of a way to get the sg_ses command working (or whatever the Solarish equivalent is) so I can control the fan speed on the SA120 via the HBA?
 

DedoBOT

Member
Dec 24, 2018
Are there changes in I/O, other disk operations, and logical block sizes between ESXi versions?
 

gea

Well-Known Member
Dec 31, 2010
DE
I'm having the same issue as before... it can't find the package?


Is this Solaris 11.4?
I got:

Code:
root@solaris114:~# pkg install sg3_utils
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: No

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                1/1       130/130      1.0/1.0  758k/s

PHASE                                          ITEMS
Installing new actions                       153/153
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
root@solaris114:~#
 

gea

Well-Known Member
Dec 31, 2010
DE
sg3_utils is only in the Solaris repo, but the binaries should run on OmniOS
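
Roughly like this (only a sketch; the install path and the enclosure device name are examples and may differ on your systems):

Code:
# on a Solaris 11.4 VM
pkg install sg3_utils

# copy the binary over to OmniOS (path is an example, check where pkg placed it)
scp /usr/bin/sg_ses root@omnios:/usr/local/bin/

# on OmniOS: query the enclosure (SES device name differs per system)
sg_ses /dev/es/ses0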
 

NOTORIOUS VR

Member
Nov 24, 2015
I've searched for a few hours on how to compile it all on OmniOS but I'm probably just not that smart.

any sites you can point me to that would help me get that done?
 

NOTORIOUS VR

Member
Nov 24, 2015
Just install Solaris and copy utils over.
Sorry if I seem thick.. I just don't quite follow.

Do you mean I should install Solaris (say on a new VM), install the utils on it and then extract the files and copy to my OmniOS VM?

EDIT: I cannot seem to install it on Solaris either...

 

gea

Well-Known Member
Dec 31, 2010
DE
Is this a regular Solaris 11.4 without modifications to the repository settings?
Mine looks like this:
 

Attachments