Need help performance testing Nexenta CE 3.1.5


azev

Well-Known Member
Jan 18, 2013
I wanted a storage system that supports VAAI, and Nexenta CE fit the bill perfectly :). It will be used as a shared iSCSI target for 4 ESXi hosts on a C6100 platform.
The system is built on a Supermicro 836 chassis with a built-in SAS2 expander and the following hardware:

X8DTL-iF
24GB RAM
Intel L5606 CPU
OneConnect 2-port 10Gb CNA in Ethernet mode
16 x Hitachi 450GB 15k drives
LSI 9211 in IT mode -- (1 port is broken)
1 x WD Raptor 150GB -- syspool

The zpool is created with 8 mirror vdevs and no L2ARC or ZIL (I was planning to use the other 9211 port to connect a few SSDs for that).
Compression=on, dedup=off, sync=normal.
I created 3 x 1TB iSCSI zvols and 3 iSCSI targets with default settings, and connected them to ESXi via software iSCSI, round-robin multipath with IOPS=100.
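
Roughly, the layout looks like this (the disk, pool, and zvol names here are placeholders, so treat it as a sketch rather than the exact commands I ran):

Code:
# 8 mirror vdevs out of the 16 Hitachi 15k drives
zpool create tank \
  mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 \
  mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0 mirror c2t6d0 c2t7d0
zfs set compression=on tank
zfs set dedup=off tank

# one of the three 1TB zvols that gets exported as an iSCSI LUN
zfs create -V 1T tank/vmfs01

# on each ESXi host, after switching the LUN to round robin:
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxx --type=iops --iops=100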

I started migrating all the VMs currently running on another storage server (Windows with StarWind), and immediately noticed something weird.
During the svMotion, the transfer speed shown on the Windows box would burst to 100 MB/s for a few seconds, drop to 10 MB/s, and then ramp back up to 100 MB/s.
It keeps doing that until the transfer is complete.

Performance testing with ATTO and IOMETER from a few Windows VMs shows amazing read/write performance:

Max Throughput 100% read [screenshot]

Random 8K - 70% read [screenshot]
Not sure if that random test shows a good result or not, but the sequential test definitely shows pretty good numbers.
However, in a very simple test of copying a file from one Windows VM to another file server, I only got around 100 MB/s.
Back when the VMs were running on StarWind I could consistently get around 200-300 MB/s over the 10Gb network.
Before installing Nexenta, I tested each of the Hitachi drives and they pushed around 200 MB/s read/write in ATTO.
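
For reference, the sort of raw per-drive test I have in mind on the Nexenta side (rather than from Windows) would be something like this -- the device name is just an example, not one of my actual disks:

Code:
# sequential read straight off one raw disk, bypassing the pool
dd if=/dev/rdsk/c1t0d0p0 of=/dev/null bs=1024k count=4096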

What do you guys think of the server performance? Is it normal?
Does anyone have any suggestions on how to benchmark raw disk performance on ZFS?
Any suggestions on how to tune this setup?

Thanks
 

Anton aus Tirol

New Member
Oct 20, 2013
1) Response times look good (sub-15ms), and the sequential I/O numbers do look pretty solid (for an in-VM test).

Did you notice extra disk activity during the transfer drops? The lazy writer flushing the ARC cache? Does memory usage on the host change?
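
A quick way to watch that from the Nexenta side while a transfer runs would be something like this (standard illumos tools; arcstat.pl may or may not be on your build, and "tank" is whatever your pool is called):

Code:
# ARC size over time
kstat -p zfs:0:arcstats:size 1
# or, if the arcstat.pl script is installed
arcstat.pl 1

# per-vdev throughput, to see the burst/flush pattern on the disks
zpool iostat -v tank 1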

2) StarWind V8 (currently in final beta, release is planned for March 2014) does have VAAI support, so you were too quick to fix what wasn't really broken :)
 

azev

Well-Known Member
Jan 18, 2013
I did notice the extra disk activity during the transfer drops. From what I can tell, all reads and writes are buffered in the ARC, and as the ARC fills up it gets flushed to disk.
The thing is, the network transfer plunges while that disk activity is going on.
This is what iostat looks like when I move data from another file server to a VM:

 

azev

Well-Known Member
Jan 18, 2013
I've done a lot more testing these past few days, including enabling sync=always and toggling zfs_nocacheflush, and the results are pretty much the same.
Turning on sync=always drops the read/write speed to only 30-40 MB/s when transferring a file to a VM.
With sync=normal I'm getting an average of 80 MB/s, but when I use Task Manager to monitor the network I notice that utilization goes to 100%, drops to 30%, then ramps back up to 100%, repeating until the transfer is done.
Is this the normal behavior of ZFS? Would I benefit from a dedicated ZIL device? (show performance zil does not show much utilization at all.) Or do I need more memory?
I find it somewhat weird that the writes shown above can go all the way up to 400 MB/s, yet the box cannot maintain a throughput of even 1Gbps.
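
For what it's worth, what I'm planning to try next looks roughly like this (pool and device names are placeholders, and zilstat is Richard Elling's DTrace script, not something built into Nexenta):

Code:
# check how much synchronous write traffic is actually hitting the ZIL
./zilstat.ksh 1

# if it turns out to be significant, add a mirrored SSD log device
zpool add tank log mirror c3t0d0 c3t1d0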

Thanks