How would this fare split up to be used as both SLOG/ZIL and L2ARC?
You are talking my language as I've been agonizing over this setup I have for years... Lol.
When thinking about a SLOG device, the faster the fsyncs, the faster it will be for VMs, k8s (and etcd!!), NFS, and anything that uses a lot of fsync.
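If you want to sanity-check a candidate SLOG device first, a quick fio run approximates the pattern a SLOG absorbs: QD1 4K writes with an fsync after every write. A sketch, assuming fio is installed and `/dev/nvme0n1` is a placeholder for a disposable test device you can overwrite:

```shell
# DANGER: writes directly to the raw device; point it at a scratch target.
# QD1 4K writes + fsync-per-write is the worst case a SLOG has to absorb.
fio --name=slog-latency \
    --filename=/dev/nvme0n1 \
    --rw=write --bs=4k \
    --ioengine=libaio --iodepth=1 --numjobs=1 \
    --fsync=1 --direct=1 \
    --runtime=30 --time_based
# Look at the fsync latency percentiles in the output: that's your floor
# for every sync write the pool takes.
```

Run it against the PM1725 and whatever else you're comparing; the fsync percentiles tell you far more than sequential throughput numbers do.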
Next, people want fast transfers over their 10G networks. However, if you have a SLOG device, it doesn't matter how many vdevs you have for striped throughput: sync writes will be limited to however your SLOG vdev is set up and its devices alone.
On a scale of 1 to 10, where 1 is the ultra-low latency of the (newer) Intel Optane drives and 10 is the latency of your average NVMe PCIe U.2 SSD, this PM1725 scores about a 3 to 4, depending on the generation.
That's fast. That's very, very fast. And comparable NVMe PCIe drives usually cost even more!
The last word on performance is queue depths. Read up on this. This is where the PM1725 really shines, and it even beats the 905P Optane in a few (select, but high-IOPS-concurrency) benchmarks. Check out the video I just posted above your post.
With that said, you should think about logistics. 3.2 TB is extreme overkill for a SLOG, which usually needs 50 GB at most - and that's if you have a 100 Gbps network.
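That ceiling falls out of how ZFS flushes transaction groups: the SLOG only ever holds a few seconds of in-flight sync writes. A back-of-envelope sizing sketch, assuming the `zfs_txg_timeout` default of 5 seconds and the common rule of thumb of keeping ~2 txgs in flight:

```shell
# Rough SLOG sizing: line rate x txg flush interval x txgs in flight.
GBPS=10                                           # a 10G network
BYTES_PER_SEC=$((GBPS * 1000 * 1000 * 1000 / 8))  # Gbps -> bytes/sec
TXG_SECONDS=5                                     # zfs_txg_timeout default
TXGS_IN_FLIGHT=2                                  # rule-of-thumb headroom
SLOG_BYTES=$((BYTES_PER_SEC * TXG_SECONDS * TXGS_IN_FLIGHT))
echo "$((SLOG_BYTES / 1000 / 1000 / 1000)) GB needed at ${GBPS} Gbps"
```

At 10 Gbps that works out to roughly 12 GB, which is why SLOG sizes stay in the tens of GB even on very fast networks.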
Yes, you could slice off a chunk of 50 GB for SLOG, and use the rest for a massive L2ARC.
But even that's a waste because L2ARC is not persistent between boots. Edit: Ah, ZFS can now persist L2ARC (since OpenZFS 2.0).
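On a recent OpenZFS (2.0+) Linux box you can confirm the persistence feature is on; this is the standard module parameter path, assuming ZFS is loaded as a kernel module:

```shell
# 1 means L2ARC contents are rebuilt from the cache device's headers
# after a reboot/pool import, so the cache survives power cycles.
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
```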
And IIRC, the last note on L2ARC is that you could actually slow reads down with this drive: if your pool vdevs can already serve reads faster than the cache device, L2ARC hits just add a detour. Granted, you'd need one hell of a stripe across fast vdevs to exceed the PM1725.
---
IMO, get a $230-ish 280GB Optane 900P. Assign 50GB to your SLOG, and maybe 100GB to the L2ARC. Tune as needed (or even create yet more partitions for more pools!).
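A sketch of that split, assuming GPT partitioning with sgdisk and a pool named `tank`; the by-id path is a placeholder for your actual Optane:

```shell
# Carve the 900P: 50G for SLOG, 100G for L2ARC, the rest left free.
DISK=/dev/disk/by-id/nvme-INTEL_SSDPED1D280GA_EXAMPLE   # placeholder ID
sgdisk --new=1:0:+50G  "$DISK"
sgdisk --new=2:0:+100G "$DISK"
# Attach the slices to the pool (log = SLOG, cache = L2ARC):
zpool add tank log   "${DISK}-part1"
zpool add tank cache "${DISK}-part2"
```

Always use the stable `/dev/disk/by-id/` paths rather than `/dev/nvme0n1` so the pool survives device renumbering across boots.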
Use this PM1725a for 100% of your heaviest fsync I/O. VMs, k8s etcd, databases, container volumes, etc.