OmniOS Napp-it on ESXi - ZFS RAIDZ1 - extremely poor write/read performance


new2VM

New Member
Oct 13, 2013
System Configuration:
System release:
SunOS Home-Nappit 5.11 omnios-b281e50 i86pc i386 i86pc
ESXi 5.1.0 standard
Hardware: HP ProLiant MicroServer, AMD Athlon(tm) II Neo N36L Dual-Core
Memory: 8 GB ECC Unbuffered
ZPool: 9 TB capacity

  pool: Home_NAS
 state: ONLINE
  scan: none requested
config:

        NAME          STATE   READ WRITE CKSUM   CAP    Product
        Home_NAS      ONLINE     0     0     0
          raidz1-0    ONLINE     0     0     0
            c2t1d0    ONLINE     0     0     0   3 TB   WDC WD30EFRX-68A
            c2t2d0    ONLINE     0     0     0   3 TB   WDC WD30EFRX-68E
            c2t3d0    ONLINE     0     0     0   3 TB   WDC WD30EFRX-68E
            c2t4d0    ONLINE     0     0     0   3 TB   WDC WD30EFRX-68E

Bonnie test result (run from the Napp-it GUI) is attached below.

I have not made any special configuration changes - just the out-of-the-box Napp-it appliance.

All hard disks are Western Digital WD Red 3 TB NAS drives (WD30EFRX): 5400 RPM class, 64 MB cache, SATA 6 Gb/s.

I am not sure why I am getting such poor write performance (read performance is not great either). Any advice please?

Thank you!
[Attachment: Bonnie.png - Bonnie benchmark screenshot]

gea

Well-Known Member
Dec 31, 2010
You can expect around 150 MByte/s sequential and maybe 70 MByte/s random performance from a single 5400 rpm disk without the help of caches. Sync writes with very small blocks are in the range of a few MByte/s. In a Z1 raid under a benchmark, the load is quite random. Sequentially, the overall performance of a Z1 scales with the number of data disks (here 3); random performance is equal to that of a single disk.
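
If you want a quick reality check of raw sequential pool throughput outside of Bonnie, a simple dd write works (a rough sketch only; the mountpoint /Home_NAS and the test size are assumptions, and it assumes compression is off, which is the default, so the zeros are really written):

# write 4 GiB of zeros and time it (illumos dd uses k/b/w size suffixes, not M)
time dd if=/dev/zero of=/Home_NAS/dd.tst bs=1024k count=4096
# remove the test file afterwards
rm /Home_NAS/dd.tst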

From your hardware, 207 MByte/s read (in a Z1 this is almost exactly 3 x 70 MByte/s) is not unexpected. 2 MByte/s write is very low, but also expected with sync enabled, slow disks, low RAM and a raid Z1 with a low expected iops of around 60-80. You can disable sync on the pool to check its influence, shut down other VMs and assign the maximum possible 7 GB RAM to the storage VM. If performance is ok then, try for example 4-6 GB to leave some room for other VMs.
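
Checking the sync influence could look like this (a minimal sketch; pool name taken from your zpool status - keep in mind that sync=disabled risks losing the last few seconds of writes on a power loss):

zfs get sync Home_NAS            # show the current setting (standard by default)
zfs set sync=disabled Home_NAS   # disable sync writes, then rerun the Bonnie test
zfs set sync=standard Home_NAS   # revert to the default when done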

On newer hardware, you may compare the io of all disks to see whether one is weak (all should have the same load). Then check the amount of RAM assigned to the storage VM - it should be at least 4 GB, more is better/faster. Then reduce the ZFS recordsize for VM storage to, say, 32k; see the sketch below.
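
A rough sketch of both checks (the vmstore filesystem name is only an example; recordsize changes affect newly written data only):

iostat -xn 5                               # watch per-disk load, all four disks should look similar
zfs set recordsize=32k Home_NAS/vmstore    # smaller recordsize for VM-style workloads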

In your case you can only think of 1 GB for ESXi 5, up to 4-7 GB for the storage VM and optionally up to 3 GB for other VMs (sum = max 8 GB). You should also not use vdisks or RDM of Sata disks, but a dedicated HBA passed through to the storage VM, which gives better performance.

In general I would say your hardware is not well suited for a virtualized NAS concept. Solaris or the free Solaris fork OmniOS has the lowest resource needs and is best suited for a storage VM, but still needs some hardware. Your hardware is good for a barebone filer with 8 GB RAM and sync disabled.

btw
- OmniOS is Unix and a Solaris fork, not Linux
- nice to see such old setups still working
 