I know on the ZFS side there has been quite a bit made of using SATA disks behind SAS expanders. You can find a long discussion of that
HERE.
Some Nexenta employee, I believe it was, made a huge deal about it. A few people called BS on it and asked for proof, since lots of people had been doing it for many years. The best information ever found to substantiate the claim, from my memory, was some very old bug reports, on Sun/Oracle boards I think it was. And I think the conclusion most people drew from it was that bugs (or just Nexenta specifically) were breaking the drivers. After years of running SATA disks myself, with so many others doing the same, I think it is all a bit of BS. I would absolutely pick SAS disks if given the option, but not for this reason; mostly for multipathing.
This is BS in my understanding, too.
I followed that thread and waited for the mpt error message in the kernel log. No one ever posted it.
In my experience, when a SATA drive is degrading, it creates many I/O resets on Linux, which drag down ZFS (on Linux, in my case) performance until ZFS marks that drive as bad/offline.
That part is true: the Linux kernel LSI module will send an I/O reset when the drive times out. Hey, this is SATA.
As long as it doesn't blow up ZFS, we should be OK.
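A minimal sketch of how one might spot that pattern in the kernel log: tally abort/reset events per SCSI address so the one flaky SATA drive stands out from the rest of the pool. The sample log lines below are hypothetical approximations of sd/mpt3sas output (real wording varies by kernel version), and `count_reset_events` is just a name for this sketch, not a real tool.

```python
import re
from collections import Counter

# Hypothetical sample lines in the style of Linux sd/mpt3sas kernel
# messages; real output differs between kernel and driver versions.
LOG_LINES = [
    "sd 0:0:3:0: attempting task abort! scmd(ffff9a1c)",
    "sd 0:0:3:0: task abort: SUCCESS scmd(ffff9a1c)",
    "sd 0:0:5:0: attempting task abort! scmd(ffff9b2d)",
    "sd 0:0:3:0: Power-on or device reset occurred",
]

# Match the SCSI address (host:channel:target:lun) on lines that
# mention a task abort or device reset.
RESET_RE = re.compile(r"sd (\d+:\d+:\d+:\d+):.*(task abort|device reset)")

def count_reset_events(lines):
    """Count abort/reset events per SCSI address."""
    counts = Counter()
    for line in lines:
        m = RESET_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

print(count_reset_events(LOG_LINES))
```

In practice you would feed it `dmesg` or `journalctl -k` output instead of the canned lines; a lopsided count against one address is the degrading drive.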
LSI emits cryptic warning/error messages that can be dug into further with a freely available Python script, or by digging into the Linux kernel module source code.
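Before decoding anything, it helps just to tally which raw `log_info` codes repeat; the codes can then be looked up in the driver source (drivers/scsi/mpt3sas) or one of the floating-around decoder scripts. A sketch, with hypothetical sample lines in the style of mpt3sas messages (the specific codes here are made-up placeholders, and `tally_loginfo` is my own name for this):

```python
import re
from collections import Counter

# Hypothetical lines in the style of mpt3sas "log_info" messages;
# exact text and codes vary by firmware and kernel version.
LOG_LINES = [
    "mpt3sas_cm0: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)",
    "mpt3sas_cm0: log_info(0x31110d00): originator(PL), code(0x11), sub_code(0x0d00)",
    "mpt3sas_cm0: log_info(0x31170000): originator(PL), code(0x17), sub_code(0x0000)",
]

# Pull out the 32-bit log_info value.
LOGINFO_RE = re.compile(r"log_info\((0x[0-9a-fA-F]{8})\)")

def tally_loginfo(lines):
    """Count how often each raw log_info code appears."""
    counts = Counter()
    for line in lines:
        m = LOGINFO_RE.search(line)
        if m:
            counts[m.group(1).lower()] += 1
    return counts

print(tally_loginfo(LOG_LINES))
```

A code that repeats across every drive at once points at something shared, like an expander or backplane, rather than one disk.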
One interesting find: with a broken AIC backplane that wasn't supplying enough power to the drives, grrh... the LSI errors showed many I/O resets too.
Once again, this is BS to my knowledge.