10x Toshiba MG03ACA300 3TB Enterprise SATA $950 BO'd $800


JayG30

Active Member
Feb 23, 2015
My take-away from when that was all fresh was: Avoid MPT -> SAS Expander -> SATA on Solaris.

Before last year, I worked for a hosting company with > 50,000 deployed servers, many thousands of them with expander backplanes and SATA disks. We never suspected a problem with that combination of hardware, but we also weren't deploying any Solaris.
Yeah, I can't remember the details exactly, but I still think it was BS, or something specific to what Nexenta was doing. I had used Solaris with this combination, as well as many others. I used OpenSolaris back in '09-'11 and illumos when it took over the mantle. All the people I knew using quality server hardware and Solaris never had issues. Then the issue just dissipated, and I seem to remember someone (maybe a Nexenta engineer) saying the issue was resolved, but no details about what the issue actually was were ever released. I could be remembering it all wrong though.
 

PnoT

Active Member
Mar 1, 2015
Texas
Ordered up 5 that came in yesterday; 2 of them had 106 hr on them while another 2 had 1,060 hr.

What are you guys using to torture test these drives when you get them in?
 

CorvetteGS

Member
Jan 20, 2014
Atlanta, GA
PnoT said:
Ordered up 5 that came in yesterday; 2 of them had 106 hr on them while another 2 had 1,060 hr.

What are you guys using to torture test these drives when you get them in?
My standard procedure for all hard drives is to run a badblocks write-mode test (which makes 4 passes, writing to every single block on the drive and reading it back), then run the SMART extended offline test. If neither badblocks nor SMART reports any errors, I'm good to go.
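Something like this should do it, assuming a Linux box with the drive at /dev/sdX (/dev/sdX is a placeholder; double-check the device name, since the write-mode test destroys all data on the drive):

    # DESTRUCTIVE: badblocks write-mode test; writes four patterns
    # (0xaa, 0x55, 0xff, 0x00) to every block and reads each one back
    badblocks -wsv /dev/sdX

    # then kick off the SMART extended offline self-test...
    smartctl -t long /dev/sdX
    # ...and once it finishes, check for logged errors and reallocated sectors
    smartctl -a /dev/sdX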
 

canta

Well-Known Member
Nov 26, 2014
I know on the ZFS side there has been quite a bit made of using SATA disks + SAS expanders. You can find a long discussion of that HERE. It was some Nexenta employee, I believe, who made a huge deal about it. A few people called BS on it and said to provide proof, since lots of people had been doing it for many years. The best information ever found to substantiate the claim, from my memory, was some very old bug reports, on Sun/Oracle boards I think. The conclusion most people drew from it was bugs (or just Nexenta specifically) breaking system drivers. After years of running SATA disks, with so many others doing the same, I think it is all a bit of BS. I would absolutely pick SAS disks if given the option, but not for this reason; mostly for multipathing.
This is BS, in my understanding.
I followed that thread and waited for the mpt kernel error message; no one ever posted it. :p


In my experience, when a SATA drive is degrading it creates many I/O resets on Linux, which drags down ZFS performance (ZFS on Linux, in my case) until ZFS marks the drive bad/offline.

It is true that the (Linux) kernel LSI module will send an I/O reset when a drive times out. Hey, this is SATA :D. As long as it doesn't blow up ZFS, we should be OK :D.

LSI emits cryptic warning/error messages that you can dig into further with a freely available Python script, or by digging through the Linux kernel module source code.
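For anyone who wants to look for those resets themselves, something like this works on Linux, assuming an LSI HBA on the mpt2sas/mpt3sas driver (/dev/sdX is a placeholder):

    # look for LSI driver resets, aborts, and timeouts in the kernel log
    dmesg | grep -iE 'mpt[23]sas|task abort|reset'

    # cross-check the drive side; a climbing UDMA CRC error count usually
    # points at cabling/backplane/power rather than the disk itself
    smartctl -x /dev/sdX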

One interesting find: a broken AIC backplane that wasn't supplying enough power to the drives, grrh... the LSI errors showed many I/O resets there too.

Once again, this is BS to my knowledge.