Understanding Solaris 11 ZFS mirror read results


legen:
I have trouble understanding these ZFS read results (I'm using napp-it). To make the benchmarks run faster I use a Solaris VM with only 2 GB of RAM, but I get the same results with 8 GB. The disks are old 500 GB Samsung 7200 RPM drives, passed through from an M1015 to the Solaris VM.
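
The mirror pool was created along these lines (a sketch rather than the exact commands; the device IDs are taken from the iostat output below):

Code:
# create a two-way mirror from the two passed-through drives
zpool create tank mirror c5t13d1 c5t14d1

# show per-device throughput, refreshing every 5 seconds
zpool iostat -v tank 5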

Using a clean Solaris 11 install with a mirror setup, zpool iostat shows the following while the dd write/read benchmark runs from napp-it.

Code:
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
tank         3.91G   460G  1.40K      0   178M      0
  mirror     3.91G   460G  1.40K      0   178M      0
    c5t13d1      -      -    803      0   134M      0
    c5t14d1      -      -    166      0  44.0M      0
-----------  -----  -----  -----  -----  -----  -----
The result:

Code:
Memory size: 2048 Megabytes

write 4.194304 GB via dd, please wait...
time dd if=/dev/zero of=/tank/dd.tst bs=2048000 count=2048

2048+0 records in
2048+0 records out

real     1:18.5
user        0.0
sys         2.1

4.194304 GB in 78.5s = 53.43 MB/s Write

wait 40 s
read 4.194304 GB via dd, please wait...
time dd if=/tank/dd.tst of=/dev/null bs=2048000

2048+0 records in
2048+0 records out

real     1:26.4
user        0.0
sys         2.3

4.194304 GB in 86.4s = 48.55 MB/s Read
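
As a sanity check on the arithmetic, the reported figure is simply total bytes over wall-clock time (bs × count from the dd command line):

Code:
# 2048 blocks x 2,048,000 bytes = 4,194,304,000 bytes
echo 'scale=2; 2048 * 2048000 / 1000000 / 86.4' | bc
# prints 48.54 -> ~48.5 MB/s, matching the reported read figure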
Benchmarking a single disk instead, I get these results.

Code:
----------  -----  -----  -----  -----  -----  -----
tank        3.91G   460G    620      0   154M      0
  c5t13d1   3.91G   460G    620      0   154M      0
----------  -----  -----  -----  -----  -----  -----

Code:
write 4.194304 GB via dd, please wait...
time dd if=/dev/zero of=/tank/dd.tst bs=2048000 count=2048

2048+0 records in
2048+0 records out

real       55.1
user        0.0
sys         2.3

4.194304 GB in 55.1s = 76.12 MB/s Write

wait 20 s
read 4.194304 GB via dd, please wait...
time dd if=/tank/dd.tst of=/dev/null bs=2048000

2048+0 records in
2048+0 records out

real     1:36.2
user        0.0
sys         1.8

4.194304 GB in 96.2s = 43.60 MB/s Read
Questions
Why am I not seeing twice the read performance with the mirror setup? I have read that a mirror should give double the read performance, since ZFS uses a round-robin technique to obtain the data. In zpool iostat I see one device delivering ~44 MB/s while the other shows ~100 MB/s. When I create single-disk pools from each device and run the tests, they perform identically.
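
One way to separate a slow disk from ZFS scheduling would be to read straight from the raw devices (a sketch; the p0 device paths are an assumption and would need checking with format):

Code:
# read 1 GB directly from each raw device, bypassing ZFS
# (device paths are assumptions -- verify with format)
time dd if=/dev/rdsk/c5t13d1p0 of=/dev/null bs=1024k count=1024
time dd if=/dev/rdsk/c5t14d1p0 of=/dev/null bs=1024k count=1024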

I plan on buying SSDs to replace these old drives (they currently hold the ESXi datastore), but I feel I need to solve this issue first so I won't run into it later.

EDIT
If I use another Solaris VM with the latest updates, I see different performance:

Code:
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
tank         15.3G   449G    410      0  51.1M      0
  mirror     15.3G   449G    410      0  51.1M      0
    c5t13d1      -      -     32      0  24.5M      0
    c5t14d1      -      -     33      0  26.6M      0
-----------  -----  -----  -----  -----  -----  -----
In this case each disk delivers about half the read throughput, for a total of ~50 MB/s according to zpool iostat. However, for some reason I still get better dd results than with the other Solaris machine.
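
A possible explanation for dd beating the raw disk numbers is the ARC serving part of the reads from RAM; the kstat counters should show this (a sketch, exact counter names vary by release):

Code:
# inspect ZFS ARC hit/miss counters
kstat -m zfs -n arcstats | egrep 'hits|misses'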
 

legen:
I flashed the M1015 to IT firmware and now I get the expected result.

Code:
Memory size: 8000 Megabytes

write 16.384 GB via dd, please wait...
time dd if=/dev/zero of=/tank/dd.tst bs=2048000 count=8000

8000+0 records in
8000+0 records out

real     3:11.4
user        0.0
sys        11.3

16.384 GB in 191.4s = 85.60 MB/s Write
read 16.384 GB via dd, please wait...
time dd if=/tank/dd.tst of=/dev/null bs=2048000

8000+0 records in
8000+0 records out

real    3m7.502s
user    0m0.015s
sys     0m7.204s


16.384 GB in 187.2s = 87.6 MB/s Read
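
For anyone checking their own card: LSI's sas2flash utility reports the firmware type on the controller (IT vs IR), e.g.:

Code:
# list all LSI SAS2 adapters with firmware version and type
sas2flash -listall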