Benchmark numbers not quite what I'd expect. Thoughts?


manxam

Active Member
Jul 25, 2015
I've been running OmniOS + Napp-IT for a few months now and have been happy for the most part. I recently added an Intel S3700 SSD as a volume and figured I'd benchmark the SSD against my two spinning-rust volumes. Having completed all of the benchmarks, I'm less than impressed with the results. The SSD pool is slower than expected, and volume2, with slower drives (though two more of them), is quite a bit faster than volume1. It also appears I'm CPU bound on some operations. Will adding another CPU, more RAM, flashing the H200(s) to IT mode, or anything else help speed these up?

Server specs:
1x XEON 5520
24GB ECC
2x Perc H200 (still in IR mode)
Volumes:
ssd1 : 1x 3700 SSD
volume1 : 6x (Z2) HGST NAS 7200rpm 4TB
volume2 : 8x (Z2) WD RED NAS 5900rpm 2TB

Code:
NAME     SIZE   Bonnie  Date(y.m.d)  File  Seq-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU  Seq-Rd-Chr  %CPU  Seq-Read  %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
ssd1     888G   start   2016.08.28   48G   309 MB/s    99    455 MB/s   59    199 MB/s  33    271 MB/s    99    340 MB/s  19    11424.2/s  19    16     +++++/s     29371/s
volume1  21.8T  start   2016.08.28   48G   293 MB/s    96    461 MB/s   68    106 MB/s  23    228 MB/s    85    330 MB/s  23    840.2/s    2     16     +++++/s     30641/s
volume2  14.5T  start   2016.08.29   48G   306 MB/s    97    539 MB/s   69    119 MB/s  26    243 MB/s    94    624 MB/s  53    906.6/s    2     16     +++++/s     30748/s
Thanks for any help or suggestions.
 

gea

Well-Known Member
Dec 31, 2010
There are two points:
first you must calculate what you can expect, then you can think about improvements.

An SSD like the Intel can deliver around 500 MB/s read and 200-460 MB/s write depending on the model, but mainly at higher queue depths. These are also values for a new and empty disk with larger file sizes. The main advantage of these SSDs is IOPS.

Only a new or secure-erased SSD runs at its best performance.

Conventional disks, regardless of rpm, can deliver between 50 MB/s and 200 MB/s per disk on read and write. The value depends on whether you are using inner or outer tracks, on fragmentation, and on file size.
Only a newly created and empty pool runs at its best performance.

If you use very small file/block sizes for tests, the values are far lower.
A single bad disk in a RAID can ruin the performance; check iostat to see whether all disks perform equally.
Writes always go through the ZFS write cache, while reads are usually uncached with random benchmark data and a file size of 2 x RAM, so writes can be faster than reads (depends on the benchmark).
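
A quick way to do that iostat check on OmniOS is to watch the extended statistics while the benchmark runs (a rough sketch; the pool name is just taken from your listing above):
Code:
# Per-disk throughput and service times; one disk with a much higher %b (busy)
# or asvc_t (average service time) than its siblings is often the culprit.
iostat -xn 5

# The same view per pool, broken down by vdev and disk.
zpool iostat -v volume1 5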

Now check the results, especially the sequential write/read values in MB/s:
SSD 450/340
Volume1 460/330
Volume2 540/620

The SSD gives around 450 MB/s write.
This is as good as the largest and fastest of the S3700 models can deliver.
Read is lower than expected.

With RAID-Z2, sequential performance scales with the number of data disks, while IOPS is always equal to that of a single disk. This is where you can see improvements with faster disks.
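
As a quick back-of-the-envelope check (a sketch only; the per-disk figures are rough assumptions, not measurements):
Code:
# Expected sequential throughput of a RAID-Z2 vdev is roughly
# (number of disks - 2 parity) x per-disk streaming speed.
# Assumed per-disk speeds: ~115 MB/s at 7200 rpm, ~90 MB/s at 5900 rpm.
echo $(( (6 - 2) * 115 ))   # volume1: ~460 MB/s expected
echo $(( (8 - 2) * 90 ))    # volume2: ~540 MB/s expected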

Your volume1 with 4 x 7200 rpm data disks offers a write performance of 460 MB/s, which means 115 MB/s per disk; this is as expected.
Read is 330 MB/s, which means about 82 MB/s per disk; low, but not out of range.

Your volume2 with 6 x 5900 rpm data disks offers a write performance of 540 MB/s, which means 90 MB/s per disk; this is not outside of expectation.
Read is 620 MB/s, which means around 100 MB/s per disk; low, but not out of range.

Your options to improve:
disable dedup, and try compression enabled vs. disabled (see the sketch below).
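
For reference, checking and switching those properties from the shell looks roughly like this (a sketch using the ssd1 pool from above; the same applies to the other pools):
Code:
# Show the current dedup and compression settings for the pool and its filesystems.
zfs get -r dedup,compression ssd1

# Keep dedup off for benchmarking, then compare lz4 vs. no compression.
zfs set dedup=off ssd1
zfs set compression=lz4 ssd1    # run the benchmark...
zfs set compression=off ssd1    # ...then run it again and compare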

The firmware of the HBA can be critical.
Try another (IT) firmware, but avoid P20 versions lower than P20.004.
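
Before flashing anything, it is worth recording what the H200s currently run. Assuming LSI's sas2flash utility is available, a quick check looks like this (the flashing procedure itself is controller-specific and not shown here):
Code:
# List all SAS2008-based controllers with their firmware and BIOS versions.
sas2flash -listall

# Show the full details (firmware, NVDATA, board name) for controller 0.
sas2flash -c 0 -list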
 

manxam

Active Member
Jul 25, 2015
Thank you, Gea, for the very thorough write-up. Deduplication is off and LZ4 compression is on for all volumes. I suspect that the H200 firmware may be a limiting factor, as it supports a very limited queue depth (32 with SATA) without crossflashing.

Considering the low reads on all volumes, is this a common symptom of limited queue depth, or is there something else to look for that would affect all volumes?

Cheers,
M
 

gea

Well-Known Member
Dec 31, 2010
You can only compare compress=off vs. on and try a different firmware.
You can also compare against onboard SATA/AHCI ports.
 

manxam

Active Member
Jul 25, 2015
Thanks Gea, I'll start by upgrading my controller firmware and then proceed from there. I may have to purchase another S3700 SSD and RAID 1 them to get the speed I'm looking for there.

Can I take a standalone disk and create a mirror without wiping the existing data?
 

gea

Well-Known Member
Dec 31, 2010
Yes, you can add disks to a basic or mirror vdev (menu disks > add) or remove them from a mirror. Data on the vdev is not affected.
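
From the command line this corresponds to zpool attach; a minimal sketch, assuming the pool is ssd1 and the device names are placeholders for the real disk IDs:
Code:
# Attach a second SSD to the existing single-disk vdev; the pool becomes a
# two-way mirror and the existing data is resilvered onto the new disk.
zpool attach ssd1 c1t0d0 c1t1d0

# Watch the resilver progress; data on the pool stays intact throughout.
zpool status ssd1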
 

manxam

Active Member
Jul 25, 2015
Hi Gea, I've gotten around to adding a second S3700 in a mirror with the first but, again, I'm not seeing the performance I'd expect. The first row below is the new mirrored pair; the second is the original single disk. There is 72 GB of data on the drive(s):
Code:
NAME     SIZE   Bonnie  Date(y.m.d)  File  Seq-Wr-Chr  %CPU  Seq-Write  %CPU  Seq-Rewr  %CPU  Seq-Rd-Chr  %CPU  Seq-Read  %CPU  Rnd Seeks  %CPU  Files  Seq-Create  Rnd-Create
ssd1     888G   start   2016.09.17   48G   313 MB/s    99    450 MB/s   61    274 MB/s  44    276 MB/s    99    658 MB/s  34    12327.4/s  20    16     +++++/s     30838/s
ssd1     888G   start   2016.08.28   48G   309 MB/s    99    455 MB/s   59    199 MB/s  33    271 MB/s    99    340 MB/s  19    11424.2/s  19    16     +++++/s     29371/s
Any ideas?
 

gea

Well-Known Member
Dec 31, 2010
Hi Gea, I've gotten around to adding a second S3700 in a mirror with the first but, again, I'm not seeing the performance I'd expect.
The question is: what did you expect?
A single SSD gives sequential write/read of 455/340 MB/s.
The mirror gives sequential write/read of 450/658 MB/s.

This is as expected.
On writes, you must write data to both disks. A write is completed only when it is done on both, which means a mirror can never be faster on writes than a single disk.

On reads, ZFS can read from both disks simultaneously, which means reads can be up to twice as fast on a mirror.

This is what I see in your results.
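
You can watch the read scaling directly during a benchmark run (a sketch using your ssd1 pool):
Code:
# Per-device breakdown of the mirror; during a sequential read both sides
# should each show roughly half of the pool's total read bandwidth.
zpool iostat -v ssd1 5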
 

manxam

Active Member
Jul 25, 2015
Thanks again, Gea. I suppose I was hoping for too much, but your explanation makes sense. I was merely curious because, aside from the sequential read, everything else stayed pretty much the same.

I'm still curious why the CPU appears to peg (according to the benchmark results) when using Bonnie. I didn't think a single Xeon E5540 would be a hindrance in data transfer, and I wonder if the speeds would increase at all with an additional CPU.
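
I'll probably watch per-core utilization while Bonnie runs to confirm that (a rough sketch; mpstat and prstat ship with OmniOS):
Code:
# Per-CPU utilization; a single core pinned near 100% usr+sys while the
# others sit idle points at a single-threaded benchmark phase, not the pool.
mpstat 5

# Per-thread (LWP) view with microstate accounting for the busiest processes.
prstat -mL 5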

I still have to crossflash the controller firmware from IR to IT mode this weekend, and hopefully the additional queue depth will help my spinning-rust numbers a little.

The reason for all of this? One stupid little thing: I have folder redirection enabled on my home network (1 Gb) for an easy transition between my laptop and desktop. Loading PowerShell on both takes 5-6 seconds as it parses the available modules on startup. These are many small files. I was hoping that the additional IOPS from a pair of SSDs would make this process faster than it was on the spinners; it didn't.

Oh well. Aside from needing to wipe a Z2 vdev to add 2 more disks to it for more storage, it's doing everything I need...

Again, thanks for being patient with me :)