Hardware failures in 2017 - Post yours!


Deslok

Well-Known Member
Jul 15, 2015
deslok.dyndns.org
Just in time for Christmas my C2750-D4i died... no POST, no IPMI. Not quite as spectacular as your heatsink getting blown off, though...
 

markpower28

Active Member
Apr 9, 2013
One month ago, my beloved ASA 5505 (in service since 2006) stopped working. Then I realized it was due to overheating :(





Then I quickly found a good deal :)

 

markpower28

Active Member
Apr 9, 2013
It was sitting between a switch and a cable modem. Too much heat over time, I guess. The ASA was rock solid.

Sent from my SM-G928V using Tapatalk
 

William

Well-Known Member
May 7, 2015
My second workstation, which I mainly use for testing, has been running this Intel DC P3700 as an OS drive for a couple of years now - Win 10 Pro, no issues whatsoever.
A couple of days ago I finished up for the night and shut down the system; next morning I got up and fired up the system... no boot.

Tried booting from the Win 10 DVD to do a repair... nothing, it can't find the drive.
The motherboard is an ASUS X99-E WS/USB3.1 / 5960X. I went into the BIOS... no NVMe drive shows up :(
Pulled the drive out and plugged it into a Supermicro workstation... nothing shows up.

It just went BOOM, the drive is dead.

 

Kal G

Active Member
Oct 29, 2014
Had a fun one this morning. Walked into one of our data centers and discovered that one of our Liebert CRACs decided to leak all over the floor. Turned out the drainage line was partially plugged by a piece of debris. I about had a heart attack when I saw the giant puddle of water flowing under one of our UPS racks.

No photos on this one. Was too busy with damage control. We were lucky that there was no permanent damage to the equipment.
 

William

Well-Known Member
May 7, 2015
That's very possible.
I don't know about excessive write cycles, though - wouldn't it be pretty hard to hit 17 drive writes per day with it being used as an OS drive?
I could see that happening on a heavy-use media drive.

My first guess would be that the firmware became corrupted.
I have seen this happen on some ES drives I have tested. A fresh firmware flash fixed the problem.
 

Patrick

Administrator
Staff member
Dec 21, 2010
Could it be possible it died from excessive reads/writes in daily usage? Intel specifies it for about 17 drive writes per day, and with it being an OS drive, it's quite possible there were excessive write cycles over the span of those couple of years.
Extraordinarily difficult to write that much to a P3700 without trying. Doubly so on an OS drive.
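For a sense of scale, here is a rough back-of-the-envelope sketch (assuming the 400GB model, decimal units, and writing around the clock) of what 17 drive writes per day would actually require:

```python
# Sustained write rate needed to hit 17 drive writes per day
# on a 400 GB drive, writing 24/7 (decimal GB assumed).
capacity_gb = 400
dwpd = 17  # drive writes per day

bytes_per_day = capacity_gb * 1e9 * dwpd
mb_per_second = bytes_per_day / 86_400 / 1e6  # seconds per day, bytes -> MB

print(f"{mb_per_second:.0f} MB/s sustained, nonstop")
# ~79 MB/s of writes around the clock - far beyond typical OS-drive activity
```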
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
That's quite possible, but I know from personal experience working with some SSDs that even as an OS drive, with normal activities, it's quite easy to rack up a lot of read and write cycles in a day. Installing games, downloading to the OS drive, copying files, etc. all add a good amount of reads/writes, though that should be somewhat offset by garbage collection and TRIM. Even running benchmarks can easily add a good number of GB read and written. That's just my personal experience working with SSDs, however.

William, the firmware is another valid possibility. I've seen hard drives and even SSDs eventually brick themselves or stop working due to firmware bugs, especially if they're on 24/7. It's strange that it manifested only after a few years; normally those kinds of bugs show up soon, not years in. For example, the old WD VelociRaptor from 2008 (300GB) had a firmware bug whereby the drive would brick itself after 49 days of constant usage.
"A lot" in a day is not in the same ballpark as actual over-utilization when you're using a drive designed for enterprise write-intensive workloads as an OS drive in a desktop PC.

400GB: 7.3 PBW (10 drive writes/day*)
800GB: 14.6 PBW (10 drive writes/day*)
1.6TB: 43.8 PBW (15 drive writes/day*)
2.0TB: 62.05 PBW (17 drive writes/day*)

And that's just what Intel says they're good for!! I'd bet they're even better.
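Those ratings follow directly from capacity × drive writes per day over the rated lifetime (a quick sketch, assuming the usual 5-year warranty window):

```python
# Check the P3700 endurance figures: PBW = capacity (TB) * DWPD * rated days
WARRANTY_DAYS = 5 * 365  # assuming a 5-year warranty period

models = [  # (capacity in TB, rated drive writes per day)
    (0.4, 10),
    (0.8, 10),
    (1.6, 15),
    (2.0, 17),
]

for capacity_tb, dwpd in models:
    pbw = capacity_tb * dwpd * WARRANTY_DAYS / 1000  # TB written -> PB written
    print(f"{capacity_tb} TB @ {dwpd} DWPD -> {pbw:.2f} PBW")
# Reproduces the listed 7.3, 14.6, 43.8 and 62.05 PBW figures
```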
 

niekbergboer

Active Member
Jun 21, 2016
Switzerland
So the Supermicro X11SSL-F in one of my Proxmox VE nodes went belly up. Fair enough: it's four months old, so I called the shop and got an RMA number without issue. I removed the cooler and the CPU, and then, as I tried to put the CPU socket cap back on (and I didn't read the manual, "coz I'm an engineer"...), I bent half the CPU pins on the mobo. Grumble! Well, it's only $200, so that's bearable.
 

SycoPath

Active Member
Oct 8, 2014
One of my 4TB HGST Deskstar NAS drives starts throwing errors and drops out of the array. Fair enough, it's 3.5 years old with 24x7 usage. So I jot down the serial number, power down FreeNAS, go find the drive, and pull it so I can RMA it. I press the power button on the server, and my UPS starts screaming at me and immediately powers off, showing overload on the display. I'm thinking WTF? So I unplug the server, power the UPS back on, and it's showing a bad battery. OK, fine. I salvage a good 12V 9Ah battery from the last set of batteries I replaced and I'm back up and running. I again attempt to power on the server and Hurricane Intel spontaneously forms in my basement, accompanied by obnoxious beeping; otherwise, the server boots fine (whew!). Apparently, a fan was spinning just fine until it got a chance to stop moving, and decided, nope, I'm done, not turning back on again.

So, what did we learn? Never turn off your servers! LOL :D
 

Tom5051

Active Member
Jan 18, 2017
Geez, these drives still seem to be unreliable after 17 years. I remember a couple of IBM 40GB Deskstar 7,200 rpm drives (IBM's HDD division was bought by Hitachi in the early 2000s) that I had in 2000. I must have RMAed those drives 20 times; they would send back a refurbished drive, it would last a week tops, and back it would go. Never touching one of those again!
 

SycoPath

Active Member
Oct 8, 2014
1 drive out of 12 of the same model, and I have 44 Ultrastars spinning away with no issues so far! Probably just jinxed myself, though. Even Backblaze's data seems to show they're better than average. Seagate, however, I wouldn't take for free.
 

Tom5051

Active Member
Jan 18, 2017
I'd tell you how many failed Seagate drives I've had to replace over the years, but I can't count that high :p
In 1999, a client company had the single system drive in their NT4 server die, and it cost them more than $2M in downtime. They spent loooots of money on new servers after that. It was a Seagate!
 

Tom5051

Active Member
Jan 18, 2017
Have a couple of Western Digital 2TB RE4 Blacks with nearly 65,000 hours on them. Still going strong. Knock on wood.