I have a number of ZFS-based systems that use SATA drives behind expanders and they work fine. I would not recommend any SAS gen1 (3Gb/s) gear, but the later LSI SAS2 (6Gb/s) and SAS3 (12Gb/s) stuff is fine (LSI SAS2008 controllers and later). I would also stick to reasonably reliable newer (~2013 and later) drives, i.e. not first-generation 1TB WD Green drives.
I think the primary argument against this in the past has been that a balky SATA drive experiencing command timeouts (due to either controller or media failures) can cause the expander to drop multiple disks out of the array, not just the failing one. (SATA traffic through a SAS expander is tunneled over STP, and error recovery for a hung SATA target tends to be much cruder than for native SAS, so the cleanup can affect everything behind that expander.)
Linux raid and ZFS handle this pretty gracefully. The Linux mpt2sas driver will generally reset the entire expander and then try to start talking to the drives again, which causes a brief hiccup. If the failing drive continues to time out, you'll see this happen repeatedly. Occasionally, if you are unlucky, some other good drives may not come back from a reset.
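If you want to keep an eye on how often those resets are happening, a rough Python sketch along these lines works; note that the exact mpt2sas/mpt3sas log text varies between kernel versions, so the keyword matching here is an assumption, not a documented message format.

    #!/usr/bin/env python3
    # Rough sketch: count suspected HBA/expander reset events in the kernel log.
    # Assumes dmesg is readable by this user and that the mpt2sas/mpt3sas
    # driver prefixes its messages with the driver name; the "reset" keyword
    # match is a guess, not a documented message format.
    import subprocess

    def reset_events():
        log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
        return [line for line in log.splitlines()
                if ("mpt2sas" in line or "mpt3sas" in line)
                and "reset" in line.lower()]

    if __name__ == "__main__":
        events = reset_events()
        print(f"{len(events)} suspected reset-related messages")
        for line in events[-5:]:  # show the most recent few
            print(line)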
Again, Linux raid and ZFS generally handle this without data loss. If the resets go on too long, you may lose drives out of the array, end up in a degraded state, and need to reboot the system to get them back. But it's unlikely you'll lose the filesystem, assuming you have redundancy (mirroring, raidz, raidz2, etc.).
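A quick way to catch that degraded state without staring at the console is to poll "zpool status -x" from cron or a monitoring script, since it only reports unhealthy pools. A minimal sketch; the "all pools are healthy" wording is what current ZFS prints when everything is fine, but treat the string match as an assumption:

    #!/usr/bin/env python3
    # Cron-able check: "zpool status -x" only prints details for pools with
    # problems, so anything beyond the healthy message deserves an alert.
    import subprocess

    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True).stdout.strip()
    if "all pools are healthy" not in out.lower():
        print("ZFS pool problem detected:")
        print(out)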
This sounds awful, but unless you are trying to run an environment with extreme commercial uptime requirements, it's not a big deal. If all of your drives are in good health, you won't ever see a problem. If a drive starts failing, swap it as soon as you notice it causing errors.
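One way to notice a drive "causing errors" before things reach the degraded stage is to watch the per-device READ/WRITE/CKSUM counters in "zpool status". A best-effort sketch; the column layout it parses is the usual human-readable output, not a stable interface, so adjust for your platform:

    #!/usr/bin/env python3
    # Flag vdevs with non-zero READ/WRITE/CKSUM counters in "zpool status".
    # The NAME STATE READ WRITE CKSUM column layout is the usual output, but
    # it is not machine-readable by contract, so this parsing is best-effort.
    import subprocess

    out = subprocess.run(["zpool", "status"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 5 and all(f.isdigit() for f in fields[-3:]):
            name, state = fields[0], fields[1]
            read, write, cksum = (int(f) for f in fields[-3:])
            if read or write or cksum:
                print(f"{name} ({state}): read={read} write={write} cksum={cksum}")

Once you know which disk it is, "zpool replace <pool> <old-device> <new-device>" swaps in the replacement while the rest of the vdev keeps serving data.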
I don't know the state of this on other ZFS-supporting OSes. They may be more or less forgiving of the LSI SAS driver's reset behavior. If so, I would expect you'd reach the "degraded array -- time to reboot" state sooner, but you still probably wouldn't lose any data. At that point, swap the drive that is causing the problems.