How long is too long to go without replacing computer equipment due to reliability concerns?


nk215

Active Member
Oct 6, 2015
412
143
43
50
Hello guys,

I have an ESXi (6.0) host that has been running on an X9SRL-F for a long time now. There's no need to upgrade to anything faster than the current E5-1650v2 for at least several more years (let's say 5). This computer is in a production environment, so reliability is important. There's nothing mechanical running inside the box besides the fans (all storage is SSD). I clean the fans about once a year, and it's in a low-dust environment, so there's typically nothing to clean.

At what point should I just replace the system due to old age (not performance)? In the past, I kept a duplicate motherboard on hand for a quick swap in case the board went bad, but I always ended up upgrading for performance before ever needing it (knock on wood).

Do you have a motherboard that has been running for 15+ years in a production environment?
 

Ralph_IT

I'm called Ralph
Apr 12, 2021
176
96
28
47
/home
Yes. We have an 18-year-old HP server.
It was our primary DC until it began to respond too slowly. Formatted it and installed 2012 R2 for testing purposes (APIs, DBs and stuff like that). Don't know why, but we seldom throw away old hardware.
Basically, we wait until the system is about to die. So far that has worked, and we've had enough time to buy a replacement or move all programs and services to another machine (that's another story, though)
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,880
620
113
Yes. We have an 18-year-old HP server.
It was our primary DC until it began to respond too slowly. Formatted it and installed 2012 R2 for testing purposes (APIs, DBs and stuff like that). Don't know why, but we seldom throw away old hardware.
Basically, we wait until the system is about to die. So far that has worked, and we've had enough time to buy a replacement or move all programs and services to another machine (that's another story, though)
Holy shit, your company has an almost 20-year-old Nocona-based Xeon server running as primary Domain Controller? I sincerely hope that DC isn't mission critical!
 

RolloZ170

Well-Known Member
Apr 24, 2016
5,369
1,615
113
Do you have a motherboard that has been running for 15+ years in a production environment?
I've had contact with motherboards from bowling-center scoring systems and pinsetter electronics that have been working for 25 years and more.
The main problem is the electrolytic capacitors: they dry out and stop working; worst case, they explode (bang).
 

JSchuricht

Active Member
Apr 4, 2011
198
74
28
I work on systems in a 24/7 manufacturing environment. In 2019 I worked for a company that still had NT4 running on some tools with hardware just as old. The company I work for now recently upgraded some Pentium M tool servers running XP to slightly newer systems running Windows 7 because we can't get IDE hard drives anymore. Systems can run well beyond their intended lifetime if you have enough spare parts.
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,880
620
113
I work on systems in a 24/7 manufacturing environment. In 2019 I worked for a company that still had NT4 running on some tools with hardware just as old. The company I work for now recently upgraded some Pentium M tool servers running XP to slightly newer systems running Windows 7 because we can't get IDE hard drives anymore. Systems can run well beyond their intended lifetime if you have enough spare parts.
wot


These are for laptops though. They do make IDE-SATA adapter cards but that might cross over into "uber janky" territory.
 

JSchuricht

Active Member
Apr 4, 2011
198
74
28
wot


These are for laptops though. They do make IDE-SATA adapter cards but that might cross over into "uber janky" territory.

Not that simple. The companies are resistant to change on toolsets, and adapters like that aren't officially supported by the equipment manufacturer. Even Windows updates are off the table.
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,880
620
113
Not that simple. The companies are resistant to change on toolsets, and adapters like that aren't officially supported by the equipment manufacturer. Even Windows updates are off the table.
Yeah, I figured. I guess even a used but good HDD wouldn't be an option.
 

Terry Kennedy

Well-Known Member
Jun 25, 2015
1,142
594
113
New York City
www.glaver.org
Do you have a motherboard that has been running for 15+ years in a production environment?
Well, the X8DTH-iF boards in my RAIDzilla II systems date from 2010-ish.

The longest-serving system here was a Dell PowerEdge R750 that dates from 2006. That was running an obsolete application for a long-time customer and we just this week got them moved off to a cloud hosting provider so that system is no longer our problem. It had been a bit of a challenge keeping that system going (mostly PSU problems).

I do have a bunch of PowerEdge R300 systems still running that get regular OS (FreeBSD) updates. They're providing name service for quite a few dozen domains. In the datacenter environment, if you don't pay for power and cooling (you normally get some number of kW included in the monthly rental) there's no real need to upgrade for power / efficiency reasons. And those systems have been rock-solid since they were installed.
 

JSchuricht

Active Member
Apr 4, 2011
198
74
28
Yeah, I figured. I guess even a used but good HDD wouldn't be an option.
Buying used sometimes happens, but it's usually in purchasing an entire tool which includes the servers, not individual components. It comes down to risk. We make 300mm pieces of glass that go "inside" stuff you use. Each unit takes several months and thousands of operations across different toolsets. If the smallest change, such as a drive, causes a delay writing a file, that changes when an operation is performed on the product; if that issue slips by until the product goes to final testing, there could be hundreds of thousands of defective units.
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,880
620
113
Buying used sometimes happens, but it's usually in purchasing an entire tool which includes the servers, not individual components. It comes down to risk. We make 300mm pieces of glass that go "inside" stuff you use. Each unit takes several months and thousands of operations across different toolsets. If the smallest change, such as a drive, causes a delay writing a file, that changes when an operation is performed on the product; if that issue slips by until the product goes to final testing, there could be hundreds of thousands of defective units.
Mission-critical. I get it.
 

nk215

Active Member
Oct 6, 2015
412
143
43
50
Thank you everyone for your replies and stories. That makes me feel much better about my x9 board.
 

Tom5051

Active Member
Jan 18, 2017
359
79
28
46
Depends how long you can live without it if it dies. Usually companies replace their stuff when the service contract expires.
 

Ralph_IT

I'm called Ralph
Apr 12, 2021
176
96
28
47
/home
...Usually companies replace their stuff when the service contract expires.
We are the most notorious exception to that rule. XD
Had a Proxmox host in production with >=15 VMs and no license for more than 8 years.
Afraid to ask for an upgrade because we'll never be able to justify why we need one when the machine runs well without one and it's "free" software.
 

nabsltd

Well-Known Member
Jan 26, 2022
422
284
63
I have an ESXi (6.0) host that has been running on an X9SRL-F for a long time now. There's no need to upgrade to anything faster than the current E5-1650v2 for at least several more years (let's say 5).
3x ESXi 6.5 here on the same motherboard, running since 2015. I recently bought 3x E5-2667 v2 to upgrade since those are really cheap. Since I'll need to upgrade 10Gbit NICs to support ESXi 7.0, I'm planning on doing a whole hardware refresh sometime next year.
But, all of this is voluntary, as the hardware is running great. I think if you aren't running above about 50% CPU utilization, the stress on everything (motherboard, CPU, fans, etc.) is so much lower than "planned" by the manufacturer that you can get 10-15 years without any issue.
 

nk215

Active Member
Oct 6, 2015
412
143
43
50
3x ESXi 6.5 here on the same motherboard, running since 2015. I recently bought 3x E5-2667 v2 to upgrade since those are really cheap. Since I'll need to upgrade 10Gbit NICs to support ESXi 7.0, I'm planning on doing a whole hardware refresh sometime next year.
But, all of this is voluntary, as the hardware is running great. I think if you aren't running above about 50% CPU utilization, the stress on everything (motherboard, CPU, fans, etc.) is so much lower than "planned" by the manufacturer that you can get 10-15 years without any issue.
What do you mean by "upgrade to 10Gb to support ESXi 7.0"? ESXi and 10Gb shouldn't have anything to do with each other.
 

nabsltd

Well-Known Member
Jan 26, 2022
422
284
63
What do you mean by "upgrade to 10Gb to support ESXi 7.0"? ESXi and 10Gb shouldn't have anything to do with each other.
I said I needed to upgrade my 10Gbit NICs to support ESXi 7.0. This is because they are using the old ixgb driver, which is not supported natively in ESXi 7.0.
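
For anyone else planning the same jump: a quick way to see which driver each NIC is bound to before upgrading is the esxcli NIC listing. This is just a sketch; the vmnic name and sample output below are illustrative, not taken from my hosts.

# Run in the ESXi host shell (SSH or local console)
esxcli network nic list
# The output has one row per vmnic, including a Driver column, e.g.:
#   vmnic2  0000:04:00.0  ixgb  Up  10000  ...
# If the Driver column shows a legacy vmklinux driver that 7.0 dropped,
# plan on replacement NICs (or a natively supported driver) before upgrading.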
 

nk215

Active Member
Oct 6, 2015
412
143
43
50
I said I needed to upgrade my 10Gbit NICs to support ESXi 7.0. This is because they are using the old ixgb driver, which is not supported natively in ESXi 7.0.
Understood now. What's your current NIC, and what speed do you get under ESXi 6.5? I have Mellanox ConnectX-2 cards and they don't want to play nice with ESXi 6.0 at all (with the 1.9.9 driver). If I pass the NICs through directly to Windows guests, then they work.
 

acquacow

Well-Known Member
Feb 15, 2017
787
439
63
42
My whole homelab is running on X9SRL boards. I upgraded them to 2648L CPUs to save power and get more cores... no plans to upgrade them anytime soon.

My desktop is all X99 as well, running a 6950X CPU, again for core count and the best IPC I could get on X99.

Newer stuff would be faster, but speed isn't really a concern; this is plenty fast for homelab use.

I'm running Intel X540 NICs for copper 10GbE to the hosts and standard vmxnet3 NICs to my guests, and they all saturate 10GbE just fine.
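
If anyone wants to sanity-check that their guests really hit line rate, a simple iperf3 run between two VMs works. Rough sketch only; the IP address, duration, and stream count below are placeholders:

# On the first VM (server side):
iperf3 -s

# On the second VM (client side), pointing at the first VM's IP:
iperf3 -c 192.168.1.10 -t 30 -P 4
# -t 30 runs the test for 30 seconds, -P 4 uses 4 parallel streams;
# a healthy 10GbE path should report roughly 9.4 Gbits/sec aggregate.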
 