I would start with a local benchmark like bonnie (pool - benchmark) to check basic values.
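If you prefer the console over the napp-it menu, a quick sequential test can be done with dd (pool name "tank" is an example; compression must be off, otherwise dd's zeros are compressed away and the numbers are meaningless):

```shell
# quick sequential write/read test directly on the pool
# pool name "tank" is an example - use your own pool
zfs set compression=off tank
dd if=/dev/zero of=/tank/dd.tst bs=1M count=10240   # write ~10 GB
dd if=/tank/dd.tst of=/dev/null bs=1M               # read it back
rm /tank/dd.tst
```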
Next, you should create a volume (menu Disks - Volumes), say 100 GB, then go to menu Comstar: create a LU on it, then a target and a target group with the target as member, and set a view from the LU to this target group. Now you can run CrystalDiskMark benchmarks over iSCSI from Windows.
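The same steps can be done from the console. A sketch (pool/volume/group names are examples; the IQN and LU GUID placeholders come from the output of the previous commands):

```shell
zfs create -V 100G tank/vol1                 # create the volume
stmfadm create-lu /dev/zvol/rdsk/tank/vol1   # create a LU on it (prints the GUID)
itadm create-target                          # create an iSCSI target (prints the IQN)
stmfadm create-tg tg1                        # create a target group
stmfadm offline-target <iqn>                 # target must be offline to join a group
stmfadm add-tg-member -g tg1 <iqn>           # add the target as member
stmfadm online-target <iqn>
stmfadm add-view -t tg1 <lu-guid>            # view from the LU to this target group
```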
Performance-relevant are the blocksize (prefer larger values for first tests) and the writeback cache setting (on = fast, off = secure but slow without a ZIL). For first tests I would disable sync (writeback cache = on) and disable compress.
If you add a ZIL for fast secure sync writes, its size must be at least enough to hold 10 s of network traffic (even for a single 10 Gb link, an 8 GB ZIL like a ZeusRAM, the best of all, can be enough). Important is very low latency with very good write performance. A good cheaper ZIL SSD is an Intel S3700.
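A rough sizing estimate, assuming the default 5 s transaction group interval (pool and device names are examples):

```shell
# one transaction group is up to 5 s of incoming traffic;
# at 10 Gbit/s (= 10000/8 MB/s) that is:
echo $(( 10000 / 8 * 5 ))    # -> 6250 MB per 5 s
# with some headroom, an 8 GB device like a ZeusRAM is therefore enough

# attach the log device to the pool (device name is an example)
zpool add tank log c1t5d0
```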
If your performance tests show massively better writes than reads, I would also look at the Windows side (driver or settings problems etc.). Reads should always be better than writes; otherwise you have an additional problem (Windows, drivers, switch, compress, dedup, ashift etc.).
Compare my values with 10 Gb, mostly sync vs nonsync:
http://napp-it.org/doc/manuals/benchmarks.pdf
Benchmark results are usually values without regard to any caches. If you are interested in cached values, you need a very large cache or very small benchmark files. For reads you can push performance with a lot of RAM.
If you disable sync (= enable writeback with iSCSI), all writes for 5 s are cached and written as one large sequential write. If you enable sync, this basic behaviour is the same, but every single write command must additionally be confirmed from the log device before it goes to RAM and the next write command is processed. (The ZIL is a separate logging mechanism, not part of the regular ZFS write caching that transforms 5 s of small writes into one large sequential write.)

This means that without sync writes, you can lose up to the last 5 s of writes. This does not affect your pool consistency, as ZFS is copy on write. Your file system will not see any corruption unless your pool goes offline because more disks fail than your redundancy level allows; in that case, your pool is lost. But if a disk comes back, your pool is ok without any data corruption. You can pull disks during writes and, beside the last writes, your pool is ok when you reinsert the disks. This is much different from hardware raid, where such an action may be a reason for a loss of data and raid consistency.
For writes, you can push performance only with fast disks, as your write cache is always at most 5 s of writes (i.e. 5 s of your max network traffic).
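To compare both behaviours on the same volume, you can simply toggle the sync property between benchmark runs (volume name is an example):

```shell
zfs set sync=disabled tank/vol1   # writeback on: 5 s of writes go to RAM first
# ...run your benchmark...
zfs set sync=always tank/vol1     # every write confirmed by the log device
# ...run your benchmark again...
zfs set sync=standard tank/vol1   # back to default (the client decides)
```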
If you want to compare L2ARC, check the primarycache and secondarycache ZFS properties (I have not tested L2ARC compress myself), see
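A sketch of those properties (dataset and device names are examples):

```shell
zfs get primarycache,secondarycache tank/vol1
zfs set primarycache=all tank/vol1         # ARC caches data + metadata
zfs set secondarycache=metadata tank/vol1  # L2ARC caches metadata only
# valid values for both: all | none | metadata

# add an L2ARC device to the pool
zpool add tank cache c1t6d0
```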
https://www.illumos.org/issues/3137
Other tuning options: see
napp-it // webbased ZFS NAS/SAN appliance for OmniOS, OpenIndiana and Solaris downloads