ZFS on Linux performance tuning guide for all-SSD storage?


BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
i have a 22x 12Gbps SSD (HUSMM8080/40) storage system using a direct-attached 216A backplane and 3x PCIe 3.0 x8 HBA controllers, but frankly, i'm not getting anywhere near the ballpark of the performance i was hoping to see. just simple sequential write/read tests with dd are slower than my 24x HDD ZFS raidz2 pools! the performance of any individual SSD is what i'd expect for a 12Gbps SSD, but put them into a ZFS pool (i've tried various raidz, stripe, mirror, etc.) and sometimes the performance is worse than a single SSD! so, i'm wondering if there's a performance tuning guide somewhere for all-SSD ZFS pools, specifically for ZFS on Linux (in my case, CentOS 7).
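
for reference, here's roughly the kind of dd test i'm running - file path, size, and pool name are just examples:

Code:
# sequential write; conv=fdatasync makes dd wait for data to reach stable storage
# note: /dev/zero is highly compressible, so with compression=on write numbers get inflated
dd if=/dev/zero of=/tank/testfile bs=1M count=65536 conv=fdatasync

# sequential read; use a file larger than RAM (or drop caches) so you're not just reading from ARC
dd if=/tank/testfile of=/dev/null bs=1M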

i tried a quick search on STH but it came back with 0 results, which made me wonder if search is even working?
 

Patrick

Administrator
Staff member
Dec 21, 2010
Search is working but you are right, we need a guide like this.
 

vrod

Active Member
Jan 18, 2015
We need some more info on this. How exactly is this set up? You mention RAIDZ2, but is that a single vdev or several vdevs?

ZFS will always carry a small penalty from COW, compression, dedup and so on; it really depends on which features you have enabled. Which OS as well? How much memory in the machine? Something like the commands below will show the layout and what's actually turned on.
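
A quick way to check - 'tank' is just a placeholder pool name:

Code:
# show the vdev layout
zpool status tank

# show which features are enabled on the pool/dataset
zfs get compression,dedup,atime,recordsize tank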

I once asked on reddit about a single-NVMe pool and a guy gave some good suggestions: Considerung a couple single-nvme zfs pools for vm storage • r/zfs
 

Rand__

Well-Known Member
Mar 6, 2014
I ran similar tests with Intel S3700s previously (not on ZoL, but ZFS):
https://forums.servethehome.com/index.php?threads/napp-it-not-scaling-well.17154/

And I have an older thread somewhere where I used a bunch of 12Gbps SSDs with similarly bad results (using the same box as in the thread above).

Neither attempt came anywhere near the expected performance level - in the end I think I was CPU limited, despite running on a (single) 2667v4...
Have you checked that on your end already?
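
An easy way to check is to watch per-core load while the benchmark runs - a sketch, assuming the sysstat package is installed:

Code:
# per-CPU utilization every 2 seconds; look for any single core pegged near 100%
mpstat -P ALL 2

# or watch individual threads (ZoL I/O threads show up as z_wr_iss, z_rd_int, etc.)
top -H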
 

EluRex

Active Member
Apr 28, 2015
Los Angeles, CA
I have been running a ZFS SATA SSD pool with 6x SM843T in RAID 0 for a year. Recently I had to destroy the pool and take all the drives to a Windows machine to TRIM them, which brought each drive's read speed from 120 MB/s back to the 420 MB/s where it's supposed to be.

Using SSDs in a ZoL pool is not recommended until native TRIM support is implemented.
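
For reference, when TRIM support does land in ZoL (it is slated for the 0.8 release), usage should look roughly like this - 'tank' is a placeholder pool name:

Code:
# one-shot TRIM of all free space in the pool
zpool trim tank

# check TRIM progress
zpool status -t tank

# or trim freed blocks continuously as they are released
zpool set autotrim=on tank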
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
I run ZFS on Linux with 7x 800GB SSDs, mostly Micron 500DC, and a single Intel S3700. Configured as a single stripe, I get full speed for everything. I've been running the ClickHouse OLAP database, number crunching at 10 GB/s (because of LZ4 compression); read speed from the drives is around 3.5 GB/s.
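
A single stripe like that is just a flat list of vdevs at pool creation - a sketch with placeholder device names; in practice you'd use /dev/disk/by-id paths:

Code:
# 7-disk stripe: maximum aggregate bandwidth, zero redundancy
zpool create tank sdb sdc sdd sde sdf sdg sdh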
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
I run ZFS on Linux with 7x 800GB SSDs, mostly Micron 500DC, and a single Intel S3700. Configured as a single stripe, I get full speed for everything. I've been running the ClickHouse OLAP database, number crunching at 10 GB/s (because of LZ4 compression); read speed from the drives is around 3.5 GB/s.
can you share details? what's the HBA/system hw config? what type of vdev config? any particular settings/features in ZFS? any particular settings at OS level? also, how are you measuring your throughput?

i'm not anywhere close to your numbers and i'm using 12Gbps SSDs vs your 6Gbps... would love to learn more details if you are able to share...
 

BackupProphet

Well-Known Member
Jul 2, 2014
Stavanger, Norway
olavgg.com
There is nothing special: I use an LSI 9211 in IT mode for 5 of the drives and two of the onboard SATA3 ports for the last two. I'm planning to change to two LSI SAS3 HBAs soon, but the current setup works fine.

It is standard Ubuntu Server (17.10) on an X9DRI with two 2696 v2 CPUs.
Compression is off, as ClickHouse does LZ4 compression itself. Though you should have LZ4 enabled by default if you know your data can be compressed.
Everything else is at defaults.
The ZFS version is the one that comes with Ubuntu 17.10.

ClickHouse has a built-in throughput meter when you execute queries; for raw disk throughput I use iostat.
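
Enabling LZ4 is a one-liner, and iostat (from sysstat) gives per-device throughput - 'tank' is a placeholder pool name:

Code:
# enable LZ4 on the pool; child datasets inherit it
zfs set compression=lz4 tank

# extended per-device stats in MB/s, refreshed every second
iostat -xm 1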
 

BLinux

cat lover server enthusiast
Jul 7, 2016
artofserver.com
There is nothing special: I use an LSI 9211 in IT mode for 5 of the drives and two of the onboard SATA3 ports for the last two. I'm planning to change to two LSI SAS3 HBAs soon, but the current setup works fine.

It is standard Ubuntu Server (17.10) on an X9DRI with two 2696 v2 CPUs.
Compression is off, as ClickHouse does LZ4 compression itself. Though you should have LZ4 enabled by default if you know your data can be compressed.
Everything else is at defaults.
The ZFS version is the one that comes with Ubuntu 17.10.

ClickHouse has a built-in throughput meter when you execute queries; for raw disk throughput I use iostat.
thanks. what type of vdev? raidzX? mirror? stripe?