Basically it's a question of performance vs. security.
Performance-wise, ZFS uses a write cache to collect small random writes for a few seconds and then writes them out as a single large and fast sequential write. As ZFS is a copy-on-write filesystem, write actions are done in a consistent way (completely or discarded). A power outage in this situation does not affect pool consistency, but the last few seconds of writes may be lost.
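As a small illustration (assuming OpenZFS on Linux; on Solaris/illumos the tunable lives elsewhere), the flush interval of this write cache is exposed as a module parameter:

  # how often the write cache (transaction group) is flushed to disk,
  # in seconds (default 5)
  cat /sys/module/zfs/parameters/zfs_txg_timeout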
If you use databases with transactions or old filesystems like ext4 or ntfs on ZFS and the database or the OS writes data, they do not care about atomic consistent writes or the ZFS write cache. So it can happen that dependent transactions (like a financial transaction, e.g. remove money from one account and then add it to another) or filesystem updates (modify data and then update metadata) are done only partly. In such a case you have transferred money to nirvana or your filesystem metadata is corrupt. This happens in addition to a corrupt file or database, where you additionally need journaling to be protected. For an SMB filer, this is never a problem. On a power outage, the currently written file is damaged as well, but ZFS itself remains always consistent and valid.
With hardware raid, you can use cache and a BBU to reduce this problem. With ZFS you use sync write and a ZIL that logs all committed writes. After a power outage, all writes that were not yet on stable storage despite a commit to the database or OS are replayed on next bootup.
So if you decide that you need such a security level, you need sync write - no discussion. Probably you will discover that secure sync write is much slower than the regular sequential write over the write cache, especially if you do not use fast SSD-only pools.
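A rough way to see the difference yourself (just a sketch; /tank/test.bin is a placeholder path, and the absolute numbers depend entirely on your hardware):

  # async small writes: collected in the write cache, fast
  dd if=/dev/zero of=/tank/test.bin bs=4k count=10000
  # sync small writes: every block must be on stable storage before dd continues
  dd if=/dev/zero of=/tank/test.bin bs=4k count=10000 oflag=sync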
This is where a Slog device helps. It allows you to put the sync log (ZIL) onto a fast SSD and keep the pool free for regular sequential writes.
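Adding one is a one-liner (pool and device names are just examples):

  # move the ZIL from the pool disks to a dedicated fast SSD
  zpool add tank log /dev/disk/by-id/ata-INTEL_SSD_EXAMPLE
  # or mirrored, to protect the log itself
  zpool add tank log mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2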
This behaviour affects local writes (Proxmox etc.), NFS and iSCSI. With sync=default, a client can decide to use sync or not. ESXi requests sync over NFS. You can override this with sync=disabled. If your client does not request sync but you want it, you can enforce it with sync=always.
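The property is set per filesystem or zvol, for example (tank/nfs is a placeholder):

  zfs set sync=always tank/nfs     # force sync for every write
  zfs set sync=disabled tank/nfs   # ignore sync requests (fast but unsafe)
  zfs get sync tank/nfs            # check the current setting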
With iSCSI you have basically the same situation. If you use a zvol on Solaris and share it via iSCSI, the corresponding sync setting is the writeback cache of the logical unit. If you disable the writeback cache, sync write is forced and you want a Slog.
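On Solaris/OmniOS with COMSTAR this is the wcd (write cache disabled) property of the logical unit; a sketch, assuming an already existing LU:

  # disable the writeback cache -> every write becomes a sync write
  stmfadm modify-lu -p wcd=true 600144F0...   # GUID of your LU
  # verify the setting
  stmfadm list-lu -v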
About the performance of 4 x Intel 730 480GB SSDs:
Assuming that a single SSD can give around 300 MB/s constant sequential performance
and around 15000 iops:
A raid-10 of 4 SSDs can give
600 MB/s sequential write (2 x SSD) and 30000 iops (2 vdevs)
compared to a Raid-Z1 with
900 MB/s sequential write (3 x data disks) and 15000 iops (like a single SSD)
With that many iops, a raid-10 layout is mainly for spindles.
With SSD-only pools, iops is mostly more than enough with any Raid-Z level.
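The same back-of-the-envelope math as a small snippet (the per-SSD figures are just the assumptions from above):

  # assumed per-SSD figures
  MBS=300; IOPS=15000
  # raid-10 of 4 disks: 2 mirror vdevs, writes scale with vdevs
  echo "raid-10: $((2 * MBS)) MB/s write, $((2 * IOPS)) write iops"
  # raid-z1 of 4 disks: 3 data disks for throughput, iops like a single disk
  echo "raid-z1: $((3 * MBS)) MB/s write, $((IOPS)) write iops"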