OK, I've been doing some tweaking, using the benchmark > dd tool along the way to validate that changes have improved performance.
Yesterday I was at about 150 MB/s on write tests. I added an SSD for log and another for cache, and I'm now pulling 300-350+ MB/s, but it stays in that range for larger file tests. In the past, when I had more RAM and more CPU, I was able to consistently hit 800+ MB/s.
As it sits, I would like to squeeze more out of this setup and narrow down whether the issue is disk space, CPU, memory, or configuration. Below are today's results, three runs of each test.
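For reference, the dd test underneath these numbers boils down to something like the sketch below (paths are placeholders; `TESTDIR` defaults to /tmp here just so the sketch runs anywhere, but it would normally point at a dataset on the pool). One caveat worth keeping in mind: if compression is enabled on the dataset, /dev/zero data compresses to almost nothing and inflates the results.

```shell
# Rough shape of the benchmark > dd test (a sketch, not the tool's exact code).
TESTDIR="${TESTDIR:-/tmp}"

# Write test: 2 MiB blocks; count sets the file size (100 x 2 MiB = ~205 MB).
dd if=/dev/zero of="$TESTDIR/dd.tst" bs=2M count=100

# Read test: stream the same file back.
dd if="$TESTDIR/dd.tst" of=/dev/null bs=2M

SIZE=$(wc -c < "$TESTDIR/dd.tst")   # 209715200 bytes for count=100
rm -f "$TESTDIR/dd.tst"
```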
ZFS perf
Blocksize 2M
Count 100
Wait 40
Size of testfile 204.8MB
Write 204.8 MB in 0.2s = 1024.00 MB/s Write
Read 204.8 MB in 0.1s = 2048.00 MB/s Read
Write 204.8 MB in 0.1s = 2048.00 MB/s Write
Read 204.8 MB in 0.1s = 2048.00 MB/s Read
Write 204.8 MB in 0.1s = 2048.00 MB/s Write
Read 204.8 MB in 0.1s = 2048.00 MB/s Read
ZFS perf
Blocksize 2M
Count 1000
Wait 40
Size of testfile 2.048GB
Write 2.048 GB in 2.3s = 890.43 MB/s Write
Read 2.048 GB in 1.2s = 1706.67 MB/s Read
Write 2.048 GB in 2.4s = 853.33 MB/s Write
Read 2.048 GB in 1.1s = 1861.82 MB/s Read
Write 2.048 GB in 2.4s = 853.33 MB/s Write
Read 2.048 GB in 1.2s = 1706.67 MB/s Read
ZFS perf
Blocksize 2M
Count 6250
Wait 40
Size of testfile 12.8GB
Write 12.8 GB in 38.7s = 330.75 MB/s Write
Read 12.8 GB in 8.9s = 1438.20 MB/s Read
Write 12.8 GB in 38.3s = 334.20 MB/s Write
Read 12.8 GB in 9s = 1422.22 MB/s Read
Write 12.8 GB in 39s = 328.21 MB/s Write
Read 12.8 GB in 9.3s = 1376.34 MB/s Read
ZFS perf
Blocksize 2M
Count 10000
Wait 40
Size of testfile 20.48GB
Write 20.48 GB in 65.4s = 313.15 MB/s Write
Read 20.48 GB in 42.2s = 485.31 MB/s Read
Write 20.48 GB in 65.2s = 314.11 MB/s Write
Read 20.48 GB in 42.5s = 481.88 MB/s Read
Write 20.48 GB in 64s = 320.00 MB/s Write
Read 20.48 GB in 54.5s = 375.78 MB/s Read
It seems that after the 2 GB write test, performance falls flat on its face. As the file size increases, read performance goes down as well. The read performance isn't a huge concern, but the write performance is.
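One plausible reading of the numbers: the 0.2 GB and 2 GB tests fit comfortably in the ARC and dirty-data buffers of the 18 GB guest, so they largely measure RAM speed, while the 12.8 GB and 20.48 GB tests overflow the cache and show the sustained disk rate. A rough per-disk breakdown of that sustained rate (assuming 4 raidz1 vdevs with 5 data disks each, i.e. 20 data disks after parity):

```shell
# Back-of-the-envelope per-disk throughput from the 12.8 GB write run above.
awk 'BEGIN {
  data_disks = 4 * (6 - 1)       # raidz1 loses one disk per vdev to parity
  sustained  = 12800 / 38.7      # MB/s, from "12.8 GB in 38.7s" above
  printf "sustained pool write: %.0f MB/s\n", sustained
  printf "per data disk:        %.1f MB/s\n", sustained / data_disks
}'
```

Roughly 16-17 MB/s per data disk is well below what a 7200 rpm disk can stream sequentially, which suggests the bottleneck is not raw platter speed.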
System specs:
Intel S2400GP
2x Intel E5-2405L
24 GB RAM
ESXi 5.5 w/ Update 2
OmniOS appliance, 18 GB allocated; passthrough won't let me use more.
PCI passthrough for an LSI SAS3008
Currently there are 4 vdevs with 6 drives each in raidz1. Disks are Toshiba DT01ACA3. Pool cap is 79%. These are in external shelves.
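On the pool-cap question: ZFS writes are widely reported to slow down once a pool passes roughly 80% full, as the allocator has to work harder to find contiguous free space. Rough free-space arithmetic (assuming these DT01ACA3s are the 3 TB model; real usable space will be lower after raidz overhead and TB/TiB conversion):

```shell
# Rough usable/free space at 79% capacity.
# Assumptions: 4 raidz1 vdevs x 5 data disks x 3000 GB (3 TB drives).
awk 'BEGIN {
  usable = 4 * 5 * 3000                    # GB, before overheads
  free   = usable * (100 - 79) / 100       # GB left at 79% cap
  printf "rough usable: %d GB\n", usable
  printf "rough free:   %d GB\n", free
}'
```

At 79% the pool is sitting right at that threshold, so capacity is at least a credible contributor here.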
What I'm trying to determine is the cause of the performance drop:
Is it due to pool cap?
Not enough RAM?
Not enough CPU?
A configuration issue?
I'd appreciate any input. I do not see any data errors.