Poor ZFS performance after omnios update


spazoid

Member
Apr 26, 2011
Copenhagen, Denmark
Hey everyone, need a bit of guidance on this one...

I wanted to migrate a zpool to a new VM running FreeNAS and, in the process, reconfigure the vdevs; luckily I had a few drives on hand to use as interim storage. My idea was to configure a zpool on the new VM, then zfs send/receive over LAN from OmniOS to FreeNAS.
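For reference, the over-LAN copy I had in mind looks roughly like this (the snapshot name and target dataset are just illustrative):

Code:
# on OmniOS: snapshot everything, then send the whole tree over SSH
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh root@freenas zfs receive -F tank2/tank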

Well, longish story short: OpenSSH on FreeNAS couldn't negotiate with the old SunSSH on OmniOS, so I had to update OmniOS, and for some reason the update just killed performance on the old zpool.
I've tried importing the old zpool on FreeNAS and upgrading it, but to no avail.
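The import/upgrade sequence was roughly this (the pool name matches the iostat output below; exact flags from memory):

Code:
zpool export tank       # on OmniOS, before handing the disks to FreeNAS
zpool import -f tank    # on FreeNAS
zpool upgrade tank      # one-way: enables the newer on-disk features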

This is the io while doing a send/receive:
Code:
[root@freenas] /mnt/tank/# zpool iostat tank tank2 5
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        7.48T  6.15T      1      0   205K      0
tank2        101G  8.03T      0     25      0   200K
----------  -----  -----  -----  -----  -----  -----
tank        7.48T  6.15T      0      0      0      0
tank2        101G  8.03T      0     25      0   199K
----------  -----  -----  -----  -----  -----  -----
tank        7.48T  6.15T      2      0   307K      0
tank2        101G  8.03T      0     25      0   302K
----------  -----  -----  -----  -----  -----  -----
tank        7.48T  6.15T      0      0      0      0
tank2        101G  8.03T      0     25      0   200K
----------  -----  -----  -----  -----  -----  -----
Scrubs can't even estimate a time to completion because throughput is so low.
One disk in the vdev does report S.M.A.R.T. errors (reallocated sectors), but taking that disk offline does not improve speeds.
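In case it helps, this is roughly how I've been checking the suspect disk (the device name here is just an example; the real one comes from camcontrol devlist):

Code:
# SMART attributes of the suspect disk (device name is illustrative)
smartctl -a /dev/ada3 | grep -E 'Reallocated|Pending|Uncorrectable'
# per-disk error counters and scrub progress
zpool status -v tank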

I'm really just interested in getting the data off the pool and recreating it with a different configuration, but at less than 1 MB/s it's probably going to be running for a while...

Any ideas?
 

gea

Well-Known Member
Dec 31, 2010
DE
There are no known problems with current OmniOS 151020, so I would
- run a performance benchmark like bonnie or filebench on OmniOS to decide if there is a pool problem
- check with iostat whether all disks perform similarly
- then run an iperf between OmniOS and FreeNAS to decide if you have a network problem
(see the sketch below for the last two checks)
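For example, something like this (the IP is a placeholder):

Code:
# on OmniOS: extended per-disk statistics; one disk with a much higher
# svc_t / %b than its siblings usually means a dying drive
iostat -xn 5

# network check: iperf server on FreeNAS, client on OmniOS
iperf -s                 # on FreeNAS
iperf -c <freenas-ip>    # on OmniOS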

It should also be possible to import an OmniOS pool directly in FreeNAS.
If encrypted SSH is the problem, try adding more CPU/RAM to the VMs, or use mbuffer or netcat instead of slow SSH.
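A minimal mbuffer pipe could look like this (port, snapshot and dataset names are examples):

Code:
# on FreeNAS (receiver), listening on an arbitrary port
mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank2/tank

# on OmniOS (sender)
zfs send -R tank@migrate | mbuffer -O <freenas-ip>:9090 -s 128k -m 1G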
 

spazoid

Member
Apr 26, 2011
Copenhagen, Denmark
Hi Gea, thanks for your input!

It might not have been clear from my post, but at the moment I'm running a send/receive directly on the FreeNAS host between the two pools (the output is pasted in the original post).
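Concretely, the local copy is just a pipe between the two pools, something like (snapshot name illustrative):

Code:
zfs send -R tank@migrate | zfs receive -F tank2/tank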

Both pools are imported in FreeNAS. The issue is not with OmniOS but with the pool itself, and I'm having trouble finding the exact problem: the disks seem to be mostly idle, CPU usage is sub-10%, and 8 GB of RAM has been allocated to the FreeNAS VM.
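(I'm judging disk idleness from FreeBSD's per-disk view; for reference:)

Code:
# FreeNAS/FreeBSD: live per-disk busy %; a single disk pinned near 100%
# while the rest idle would point at a failing drive
gstat -p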