What's your take on using SSD RAID1 for a system drive?


Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
The write patterns would end up completely different if so much as a single byte was written to one drive initially and not the other. See: butterfly effect.

I have had Linux servers with RAID1 OS disks (swap and tmp included) for years in an enterprise environment and never seen it. I have seen failures, sure, but I don't think I have ever seen a single server suffer more than one SSD failure. I really don't think this is a concern now.

I would always use the same type of disks, unless it was a home server with software RAID and I just happened to have two different ones rather than buying new.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
Because SSDs often die from specific write patterns, and in RAID1 all drives execute the same pattern. That would mean they are more likely to die at the same time, which in practice would mean the first one degrades the array and the second one dies when you resync a new disk (never finishing the sync).

These are basic SSD properties.

To make matters worse, SSDs are far less likely to give any advance warning of failure in SMART.
What drives?

What examples?

I asked earlier and would still like to know where this started?

Are you saying that garbage collection and organization of data on the NAND are identical because identical drives run identical firmware? I haven't researched this, but it seems very unlikely that they'd be. I understand they could (and likely would) run at the same time on identical drives/firmware, but I can't really see them using identical flash locations, etc. My rough understanding is that even if the writes were identical, the NAND would still be unlikely to be used identically enough for the drives to fail the same way at the actual storage level.
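If anyone wants to check whether two mirrored SSDs are actually wearing at the same rate, here is a rough Python sketch. It assumes smartmontools 7+ (for --json output), mdadm, root privileges, an array at /dev/md0, and ATA SSDs that expose one of a few vendor-specific wear attributes; the array path and attribute names are examples only, so adjust for your own hardware.

```python
#!/usr/bin/env python3
# Rough sketch: compare SSD wear across the members of an md RAID1 array.
# Assumes smartmontools >= 7 (for --json), mdadm, and root privileges.
# /dev/md0 and the attribute names below are examples, not universal.
import json
import subprocess

ARRAY = "/dev/md0"  # assumed array name; change to match your system

# Vendor-specific SMART attributes that track flash wear (examples only).
WEAR_ATTRIBUTES = {
    "Media_Wearout_Indicator",   # Intel
    "Wear_Leveling_Count",       # Samsung
    "Percent_Lifetime_Remain",   # Crucial/Micron
}


def raid1_members(array):
    """List the block devices backing the array, parsed from 'mdadm --detail'."""
    out = subprocess.run(["mdadm", "--detail", array],
                         capture_output=True, text=True, check=True).stdout
    members = []
    for line in out.splitlines():
        fields = line.split()
        # Active members appear as table rows ending in a /dev/... path.
        if fields and fields[-1].startswith("/dev/") and "active" in line:
            members.append(fields[-1])
    return members


def wear_values(device):
    """Pull the wear-related attributes for one device via 'smartctl -A --json'."""
    out = subprocess.run(["smartctl", "-A", "--json", device],
                         capture_output=True, text=True).stdout
    data = json.loads(out)
    table = data.get("ata_smart_attributes", {}).get("table", [])
    return {a["name"]: a["raw"]["value"]
            for a in table if a["name"] in WEAR_ATTRIBUTES}


if __name__ == "__main__":
    for member in raid1_members(ARRAY):
        print(member, wear_values(member) or "no known wear attribute reported")
```

Run it against a mirror that has been in service for a while; if identical writes really produced identical wear, the numbers for both members would track each other closely.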
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
Ladies and gentlemen, I think we have one of those people who hasn't re-evaluated SSD technology, and especially reliability, since OCZ was a thing ;)

I hear that every time you write 0x1654dsfv to sector 184616548 of a RAID1 pair, god kills a kitten.
 

vl1969

Active Member
Feb 5, 2014
634
76
28
Speaking of OCZ, I had one die in my HTPC this year. It was working for about a year with no issues, then one day the PC simply would not boot.
 

xnoodle

Active Member
Jan 4, 2011
258
48
28
I can go either way.
I found my stack of Intel SSDs that got bricked... finally got around to unbricking them. Stupid G3 firmware.
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,625
2,043
113
I can go either way.
I found my stack of Intel SSDs that got bricked... finally got around to unbricking them. Stupid G3 firmware.
5+ year old issue? Non-RAID related? How is this relevant?