Help determining performance w/ M1015 + OpenIndiana + ESXi


joelones

New Member
Sep 5, 2013
Hi all,

First post, so be gentle.

I need help determining whether I'm getting decent speeds with an M1015 passed through to an OpenIndiana (build 151a8) VM on an ESXi 5.1 host.

The host is as follows:
Motherboard: ASRock 970 EXTREME4 | CPU: FX-8320 | Memory: 16 GB | HBA: 1 x M1015 flashed to IT firmware (v15)

The vanilla OpenIndiana VM is configured with 2 vCPUs and 5120 MB of RAM.

ZFS raidz pool consisting of four 1 TB drives:
  • Seagate ST31000528AS
  • Seagate ST31000528AS
  • WD WD10EAVS
  • WD WD10EAVS
First question: when people quote their speeds with dd, they aren't doing so with compression turned on, correct?
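For reference, here's how I'm checking and toggling the properties before each run (standard ZFS commands; tank is my pool):

Code:
# see what the pool is currently set to
zfs get compression,dedup tank
# turn both off before benchmarking raw disk speed
zfs set compression=off tank
zfs set dedup=off tank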

With compression on (dedup + compression):
Code:
root@openindiana:/tank# dd if=/dev/zero of=/tank/zerofile.001 bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 5.1869 s, 2.0 GB/s
root@openindiana:/tank# dd if=/tank/zerofile.001 of=/dev/null bs=1M
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 1.81395 s, 5.8 GB/s
I'm assuming the above is meaningless with compression.
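If I wanted to rule that out without touching the pool properties, I suppose I could write incompressible data instead. A rough sketch (caveat: /dev/urandom is CPU-bound, so the write figure would be capped by the RNG rather than the disks, which mostly makes the read-back test honest):

Code:
# random data can't be compressed or deduped away
dd if=/dev/urandom of=/tank/randfile.001 bs=1M count=10000
dd if=/tank/randfile.001 of=/dev/null bs=1M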

With no compression:
Code:
root@openindiana:/tank# dd if=/dev/zero of=/tank/zerofile.002 bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 63.6593 s, 165 MB/s
root@openindiana:/tank# dd if=/tank/zerofile.002 of=/dev/null bs=1M
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 38.5189 s, 272 MB/s
I would have expected higher write speeds. Any suggestions on what I can do to increase them would be really helpful.

Thanks
 

MiniKnight

Well-Known Member
Mar 30, 2012
NYC
Slightly lower than I would expect. Is dedup on in the second one? Write speed on RAID-Z, as with RAID 5, isn't strong.
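Quick way to check (standard ZFS commands, assuming the pool is named tank):

Code:
zfs get compression,dedup tank
# 1.00x here means dedup isn't actually collapsing anything
zpool get dedupratio tank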
 

mrkrad

Well-Known Member
Oct 13, 2012
RAID 5 in ZFS (RAID-Z1) is brutal, since it has to read and checksum every block without hardware assist: data goes into the CPU (cache), back out, back in, back out. The fact is that has overhead, and there is no magic that makes it fast. You can add some SSD to the mix, but eventually that will run out too.
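If you do throw SSD at it, it's the usual log/cache arrangement. Something like this sketch (the device names are placeholders for whatever your SSDs enumerate as):

Code:
# SLOG absorbs sync writes only; mirror it so losing a device doesn't hurt
zpool add tank log mirror c4t0d0 c4t1d0
# L2ARC is read cache only, and its headers eat into that 5 GB of VM RAM
zpool add tank cache c4t2d0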

For ESXi, the key element you need to worry about is latency, not linear performance. What will destabilize your ESXi host is poor latency. Fire up the host and run 3 or 4 VMs with CrystalDiskMark (CDM) or other purely random workloads (a concurrent read/write mix). If you start to see latency warnings in the event log, that's trouble.
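You can watch this from the host with esxtop (interactive tool, so these are keystrokes rather than a script; the latency threshold is a rough rule of thumb):

Code:
# from the ESXi shell
esxtop
# press 'd' for the disk adapter view, 'u' for per-device stats
# watch DAVG/cmd (device latency) and KAVG/cmd (kernel queuing);
# sustained DAVG in the tens of milliseconds is where the warnings start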

Typically, latency warnings should only occur a few times during snapshot/SCSI-hot-add backups (Veeam). They precede the "datastore heartbeat loss" event, which is pretty catastrophic.

Using four 1 TB consumer SATA drives is a sure way to bring out that datastore loss! Give it a shot, man. You will see what I mean.

Datastore loss can happen in less time than the 7-second TLER timeout! A good RAID system will continue to function without added latency when a drive starts doing its business of remapping sectors under load.
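You can at least check whether the drives expose error recovery control at all. A smartctl sketch (the device path is a placeholder, and consumer drives like those EAVS Greens will often just refuse the set command):

Code:
# read the current ERC timers, if the drive supports them
smartctl -l scterc /dev/rdsk/c3t0d0
# try to cap error recovery at 7.0 seconds (values are tenths of a second)
smartctl -l scterc,70,70 /dev/rdsk/c3t0d0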

I would strongly recommend running a benchmark while doing a migrate (or Storage vMotion "move to") to stress test your system: say, 3 VMs running CDM at the same time, plus a migrate on top. That's a good way to stress all components, especially with thin provisioning enabled.
 

joelones

New Member
Sep 5, 2013
Thanks for the suggestion, mrkrad.

Do people normally quote their benchmarks with dedup on or not?

Here is a benchmark with dedup on. Is /dev/zero even representative of real data?

Code:
root@openindiana:~# dd if=/dev/zero of=/tank/zerofile.003 bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 31.9325 s, 328 MB/s
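If I understand dedup right, the zeros collapse to almost nothing, so I should probably check the ratio to see whether that 328 MB/s means anything. Something like:

Code:
# a huge ratio here would mean the run was mostly DDT updates in RAM,
# not actual writes to the four disks
zpool get dedupratio tank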