Hi guys, a ZFS fan from Belgium here.
I am running OpenIndiana 158a with a ZFS pool consisting of 4 disks and an SSD.
For a while now, I have had the following hardware at home:
SuperMicro X9SCA-F
Intel Xeon E3-1230
32GB of UDIMM ECC memory
4 HP-branded Seagate disks (7.2K RPM, 80GB capacity)
1 Patriot SE Pyro SSD
IBM M1015 flashed to IT firmware
This hardware runs ESXi 5, in which I created a VM for OpenIndiana. The VM has 4GB of vRAM, and the IBM M1015 (which is now just an LSI 2008 controller) is passed through to it with VT-d.
I created a pool with Napp-it and did some performance benchmarks. You will find the results below:
As you can see, my IOPS are pretty good: 8000 read IOPS and 2000 write IOPS is not bad, I think, certainly not for 4 SATA disks that were just server pulls (I think so; I found them on a shelf in the staging area at the office). However, according to a fellow "Tweaker" on the Dutch forum "Gathering of Tweakers", my 4K write performance is below what is to be expected. His post (literally translated) is quoted below.
Code:
bart@OpenIndianaVirtual:/tank/test1$ dd if=/dev/zero of=zero.file bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 63.0561 s, 170 MB/s
bart@OpenIndianaVirtual:/tank/test1$ dd if=zero.file of=/dev/null bs=1M
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 48.6626 s, 221 MB/s
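The 4K discussion can be reproduced the same way. Below is a rough 4K-block dd pass; the path, size, and count are my assumptions, not the exact commands from the original benchmark. Note that dd issues sequential, single-threaded I/O, so this only approximates what a real random-4K workload would do to the pool:

```shell
# Sketch of a 4K-block dd test (hypothetical path /tmp/zfs4k.file; a real run
# would target a dataset on the pool, e.g. /tank/test1).
dd if=/dev/zero of=/tmp/zfs4k.file bs=4k count=16384 2>/dev/null  # 64 MiB in 4 KiB blocks
dd if=/tmp/zfs4k.file of=/dev/null bs=4k 2>/dev/null              # read it back
wc -c < /tmp/zfs4k.file   # 16384 * 4096 = 67108864 bytes
rm -f /tmp/zfs4k.file
```

With the timings dd prints on stderr, dividing the byte count by the elapsed seconds gives the 4K throughput being argued about.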
Quote:
Based on the fact that a single 80GB disk alone has a throughput of 50-75MB/s, your 35MB/s is slow indeed... The cause of this? I don't know, but I suspect the memory: with 1GB you have zero prefetching and caching, so everything is write-through and read-through.

(At the time of his reply I only had 1GB of vRAM; I have since cranked it up to 4GB, with the results you can find above, but the 4K throughput is still below what is expected.)

Is this true? Should I get the same throughput as with the dd command? Why (not)?
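One caveat worth noting here (my observation, not from his post): dd from /dev/zero can flatter ZFS. If compression is enabled on the dataset (I don't know whether it is on this pool), zeros compress to almost nothing, and a file you just wrote is read back from the ARC in RAM rather than from the disks. Less compressible data gives a fairer read-back test; a sketch, using a hypothetical /tmp path:

```shell
# Write incompressible data instead of zeros, then read it back.
dd if=/dev/urandom of=/tmp/rand.file bs=1M count=16 2>/dev/null
dd if=/tmp/rand.file of=/dev/null bs=1M 2>/dev/null
wc -c < /tmp/rand.file    # 16 MiB = 16777216 bytes
rm -f /tmp/rand.file
```

For the read side, a file much larger than the ARC (or an export/import of the pool between write and read) is needed to take the cache out of the picture.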
I did the above tests because a colleague of mine (one of our engineers/pre-sales people, our designated storage specialist) stated that 4K reads/writes are the worst case and the reference point against other (professional/enterprise) storage. Is this true? Have I run reliable, meaningful benchmarks? If I present these results to you, or to some "reference website", are they representative? In other words: do these benchmarks give people an accurate (basic) view of performance under worst-case conditions, and how is my array performing?
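For cross-checking figures like these, a quick conversion between 4K throughput and IOPS helps; for example, the 35MB/s write figure quoted above implies roughly (a back-of-the-envelope sketch, using decimal MB):

```shell
# Throughput divided by the 4 KiB block size gives the implied ops/second.
echo "$(( 35 * 1000000 / 4096 )) IOPS at 35 MB/s with 4 KiB blocks"
```

That lands in the same ballpark as the 8000 read / 2000 write IOPS mentioned earlier, which is one sanity check that the benchmarks are at least internally consistent.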