ZFS array performance review (?)


hyperbart

New Member
Jun 17, 2012
Guys, a ZFS fan from Belgium here ;).

I am running OpenIndiana 158a with a ZFS pool consisting of 4 disks and an SSD.

For a while now I have had the following hardware at home:

SuperMicro X9SCA-F
Intel Xeon E3-1230
32GB of UDIMM ECC memory
4 HP-branded Seagate disks (7.2K RPM, 80GB capacity)
1 Patriot SE Pyro SSD
IBM M1015 flashed to IT firmware

This machine runs ESXi 5, in which I made a VM for OpenIndiana. The VM has 4GB of vRAM and the IBM M1015 (which is now effectively a plain LSI 2008 controller) passed through to it with VT-d...

I created a pool with Napp-it and did some performance benchmarks. You will find the results below:
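For reference, the pool layout behind these numbers can be shown with the standard ZFS commands ("tank" is the pool used in the tests below):

Code:
zpool status tank   # vdev layout, including the SSD
zpool list tank     # capacity and usage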

[benchmark screenshot missing; it showed roughly 8000 read IOPS and 2000 write IOPS]
Code:
bart@OpenIndianaVirtual:/tank/test1$ dd if=/dev/zero of=zero.file bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 63.0561 s, 170 MB/s
bart@OpenIndianaVirtual:/tank/test1$ dd if=zero.file of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 48.6626 s, 221 MB/s
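For the 4K case specifically, a minimal variant of the same dd test (just a sketch, using the dataset above; note that /dev/zero output is highly compressible, so compression must be off on the test dataset or dd ends up measuring the CPU rather than the disks):

Code:
# disable compression so /dev/zero data actually hits the disks
zfs set compression=off tank/test1
# ~10GB written and read back in 4K blocks (2621440 * 4KiB = 10GiB)
dd if=/dev/zero of=zero4k.file bs=4k count=2621440
dd if=zero4k.file of=/dev/null bs=4k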
As you can see, my IOPS are pretty good. 8000 read IOPS and 2000 write IOPS is not bad, I think, certainly not for 4 SATA disks that were just server pulls (I assume so; I found them on a shelf in the staging area at the office). However, according to a fellow "Tweaker" on the Dutch forum "Gathering of Tweakers", my 4K write performance is below what should be expected. His post (literally translated) states the following:

Based on the fact that a single 80GB disk alone has a throughput of 50-75MB/s, your 35MB/s is indeed slow...
The cause of this? I don't know, but I suspect the memory
(at that time I only had 1GB of vRAM, but I cranked it up to 4GB with the results you can find above, and the 4K throughput is still below expectations)

because with 1GB you have zero prefetching and caching, so everything is write-through and read-through.
Is this true? Should I get the same throughput as with the dd command? Why (not)?
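To judge the caching claim, the actual ARC size can be read from kstat on OpenIndiana (a sketch; statistic names may vary slightly between builds):

Code:
kstat -p zfs:0:arcstats:size    # bytes currently held in the ARC
kstat -p zfs:0:arcstats:c_max   # current ARC size ceiling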

I did the above tests because a colleague of mine (one of our engineers/pre-sales, who is our designated storage specialist) stated that 4K reads/writes are the most punishing workload and the usual reference point when comparing to other (professional/enterprise) storage. Is this true? Have I run reliable, meaningful benchmarks? If I present these results to you guys, or to, say, some reference website, are they representative? In other words: do these benchmarks give people an accurate (basic) view of performance under worst-case conditions, and how is my array performing?
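If it helps to reproduce this, a tool like iozone would exercise 4K I/O directly and report operations per second rather than inferring them from dd (a sketch, assuming iozone is installed on the box):

Code:
# sequential write (-i 0) and random read/write (-i 2) on a 4GB file
# with 4K records, results reported in ops/sec (-O)
iozone -i 0 -i 2 -r 4k -s 4g -O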
 

cactus

Moderator
Jan 25, 2011
If you have 7200.7s, then you are going to see <60MB/s per disk. They are only rated at a max of 58MB/s on the outer edge, and the 80GB model was single-platter, so I would guess even less than that. I would say 170MB/s from 4 is good (that works out to ~43MB/s per disk). I have two 160GB Seagates from around that time and they are very slow at reads and writes.

As to there being no caching with 1GB of vRAM: this and my quick skim of the thread linked in the article suggest only 256MB of RAM is reserved for the system, leaving the other 768MB free for use as ARC (the ZFS cache).
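For what it's worth, on illumos-based systems the ARC ceiling can also be pinned explicitly in /etc/system (a sketch; the value is in bytes, this example caps it at 768MB, and it takes effect after a reboot):

Code:
* cap the ZFS ARC at 768MB (768 * 1024 * 1024 bytes)
set zfs:zfs_arc_max=805306368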

What benchmark did you use to get the 4K IOPS? If I understand your last paragraph correctly: yes, 4K at a queue depth of 1 gives you the worst performance. 4K performance should increase as queue depth increases, since the scheduler can better serialize your reads/writes (less time seeking). 4K performance is also an important metric for a storage server serving multiple clients.
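A crude way to see the queue-depth effect with nothing but dd (a sketch, reusing the 10GB file from the first post) is to run several 4K readers against different slices of the file in parallel and add up the rates they report:

Code:
# four concurrent 4K readers, each on its own 2GB slice of the file;
# the file is larger than RAM, so this mostly bypasses the ARC
for i in 0 1 2 3; do
  dd if=zero.file of=/dev/null bs=4k skip=$((i * 524288)) count=524288 &
done
wait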