Why are people selling ancient servers?


hotfur

New Member
Aug 16, 2022
3
0
1
Interestingly, some eBay sellers are listing ancient servers like the HP DL580, Dell R430, and similar obsolete machines for surprisingly large amounts of money. Even assuming someone bought one, the power consumption per unit of performance is just too high compared with more recent options.
What are the use cases for these old servers? Who would buy them?
 

alaricljs

Active Member
Jun 16, 2023
199
74
28
Companies that want a direct swap because they think it's too time-consuming to upgrade in the face of a failed system. That's not a lot of money for a business that measures outages in thousands of dollars per minute. Yes, there are still businesses that leave stuff that's working but extremely frail (due to poor solution design) alone until it breaks.
 

SnJ9MX

Active Member
Jul 18, 2019
130
84
28
I'd rather have an R630 that idles at 58W for my homelab than the new stuff that's 150-250W idle, considering it rarely goes above 10% CPU usage. Not to mention you can build up a nice R630 for $450ish with 128GB memory + storage.

1x E5-2690 v4: single-thread = 2000ish, multi-thread = 20k. Realistically, for EPYC, gen 1 does not match that single-thread score, gen 2 starts getting there, and gen 3 is too expensive for hobby use at this point.
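
Rough math on what that idle-power gap means over a year (a back-of-the-envelope sketch; the wattages are the ones quoted above, and the electricity rate is an assumption):

Code:
# Back-of-the-envelope idle power cost comparison.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.15            # assumed electricity rate; plug in your own

def annual_cost(idle_watts: float) -> float:
    """Yearly electricity cost for a box idling 24/7 at the given wattage."""
    kwh_per_year = idle_watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * RATE_USD_PER_KWH

r630 = annual_cost(58)     # R630 idle figure above        -> ~$76/yr
newer = annual_cost(200)   # midpoint of the 150-250W range -> ~$263/yr
print(f"R630 @ 58W:   ${r630:.0f}/yr")
print(f"Newer @ 200W: ${newer:.0f}/yr")
print(f"Difference:   ${newer - r630:.0f}/yr")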
 

zack$

Well-Known Member
Aug 16, 2018
721
351
63
I'd rather have an R630 that idles at 58W for my homelab than the new stuff that's 150-250W idle, considering it rarely goes above 10% CPU usage. Not to mention you can build up a nice R630 for $450ish with 128GB memory + storage.

1x E5-2690 v4: single-thread = 2000ish, multi-thread = 20k. Realistically, for EPYC, gen 1 does not match that single-thread score, gen 2 starts getting there, and gen 3 is too expensive for hobby use at this point.
This is the real issue for future homelabs. Enterprise is moving towards higher power usage per server, which allows for, amongst other things, consolidation.

Higher power usage in the enterprise is leading to more liquid immersion and liquid cooling being built into the systems, which can be problematic for your normal homelabber to maintain.

The way I see it is that homelabbers may be relegated to edge-category equipment in the future... not that we ever needed heavy iron to begin with :p
 

NPS

Active Member
Jan 14, 2021
147
44
28
1x E5-2690 v4: single-thread = 2000ish, multi-thread = 20k. Realistically, for EPYC, gen 1 does not match that single-thread score, gen 2 starts getting there, and gen 3 is too expensive for hobby use at this point.
How does this compare to an EPYC 7371 (the high-frequency part)? In the end an EPYC 7302 should almost always be the better pick, but I thought the 7371 could be the Naples part that is comparable to an E5-2690 v4 performance-wise. I had a very basic Naples system idling between 40W and 45W, so I guess power consumption could be similar.
 

CyklonDX

Well-Known Member
Nov 8, 2022
857
283
63
Those aren't that old. Sandy Bridge is still fair game for big storage, like ZFS systems.

There are still companies running Nehalem or older servers.
There are also people who want to start a homelab but have no $$$. They often start with Nehalem-series servers that cost around 200 USD.

A top Xeon v4 (like the E5-2690 v4) has very similar performance to a Gold 6132 (Skylake), and is just a bit behind (~6%) in single-core performance compared to Cascade Lake Xeons like the Gold 6230R.
 

ericloewe

Active Member
Apr 24, 2017
295
129
43
30
Specialty thing for applications where arguing with the vendor over what it should run on is a lost cause.
Pretty high-spec configuration, though they could have at least used an R630 as the base. Not a bad price if you take the various components into account. It still has a few years left in it for many applications. Gen 13 is still receiving firmware updates; the latest system firmware version is from November 2023. If you don't need the latest and greatest, one of these is still very capable.
 

SnJ9MX

Active Member
Jul 18, 2019
130
84
28
How does this compare to an EPYC 7371 (the high-frequency part)? In the end an EPYC 7302 should almost always be the better pick, but I thought the 7371 could be the Naples part that is comparable to an E5-2690 v4 performance-wise. I had a very basic Naples system idling between 40W and 45W, so I guess power consumption could be similar.
EPYC 7371: single thread = 2377, multi = 31k
Xeon e5-2690v4: single thread = 2076, multi = 20k
EPYC 7302: single thread = 2013, multi = 33k

EPYC 7371 cost: $107
2690v4 cost: $22... well, it was a couple of months ago when I bought 4; now the lowest on eBay is $34
EPYC 7302 cost: $200? for Dell-locked
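
For what it's worth, quick multi-thread-points-per-dollar math using the scores and prices above (a rough sketch; CPU price alone obviously ignores boards, memory, and idle power):

Code:
# Rough price/performance comparison using the scores and prices quoted above.
cpus = {
    # name: (single-thread score, multi-thread score, approx. CPU price in USD)
    "EPYC 7371":  (2377, 31_000, 107),
    "E5-2690 v4": (2076, 20_000, 34),
    "EPYC 7302":  (2013, 33_000, 200),
}

for name, (st, mt, price) in cpus.items():
    print(f"{name:11s}: {mt / price:4.0f} multi-thread points per dollar "
          f"(single-thread score {st})")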

 

NPS

Active Member
Jan 14, 2021
147
44
28
EPYC 7371 cost: $107
2690v4 cost: $22... well, it was a couple of months ago when I bought 4; now the lowest on eBay is $34
EPYC 7302 cost: $200? for Dell-locked
7302 is more like $80 for Dell-locked or $150 unlocked. In the end I personally think CPU prices don't matter that much at this level. Depending on what you need at the system level, either Xeon will be much cheaper (used 1U boxes) or EPYC opens up completely new possibilities at a similar price point (a used ATX board like the H11SSL-i in a desktop case).
 

garbled

New Member
Jun 30, 2020
6
0
1
My homelab still has a mix of 2670 V0/V2, X5670, and even a few X3400. I like running bigger iron. I don't have a good justification for you other than I enjoy it a lot. I still live by the very old mantra that if a server doesn't kill you when it falls on you, it's not a server.
 

SnJ9MX

Active Member
Jul 18, 2019
130
84
28
7302 is more like $80 for Dell-locked or $150 unlocked. In the end I personally think CPU prices don't matter that much at this level. Depending on what you need at the system level, either Xeon will be much cheaper (used 1U boxes) or EPYC opens up completely new possibilities at a similar price point (a used ATX board like the H11SSL-i in a desktop case).
Believe me, I had this mental battle quite a bit over the last few months. I have an R630 in a datacenter in Denver (1x 2690v4, 8x 16GB, 4x 960GB D3-S4610, 2x 1.92TB PM863, 1x Optane DC P1600X). I built an essentially equivalent Supermicro 1U at home (same CPU/memory, but with only 2x 3.2TB Micron 7300 MAX for storage). Both are to support an app idea I'm working on, with the datacenter being primary and my home as failover.

I really wanted to make EPYC work. But the officially packaged systems are not cheap, even for barebones. The Dell R6415, for example, cannot accept 2nd-gen EPYC, which makes it even less of a value case. 1st-gen EPYC is not an upgrade from Xeon v4. Getting a 2U-4U case and fitting it out isn't exactly cheap either, even when getting, for example, an SM H11/H12 board with CPU from China (from TUGM4470 (sp)). I am limited to 1A at 120V for my datacenter install ($55/mo). I don't want high idle power usage at home either. I am by no means CPU-limited or PCIe-lane-limited. These are all the relevant tradeoffs.
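
For context, the arithmetic behind that 1A limit (the idle figure is the one quoted earlier in the thread; the load figure is a hypothetical assumption):

Code:
# Why the 1A @ 120V colo limit matters: it is the entire continuous power budget.
amps, volts = 1.0, 120
budget_w = amps * volts        # 120W for the whole box

idle_w = 58                    # R630 idle figure quoted earlier
load_w = 110                   # hypothetical draw under load (assumption)

print(f"Budget: {budget_w:.0f}W")
print(f"Idle:   {idle_w}W ({budget_w - idle_w:.0f}W headroom)")
print(f"Load:   {load_w}W assumed ({budget_w - load_w:.0f}W headroom)")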

My thought process is: if the app idea takes off, I will have plenty of $$$ to do 2nd-hand EPYC. If it doesn't, not much is lost. Or I can spin up some instances in Azure US West Central (Cheyenne, WY - only 5.0 ms from the Denver datacenter). I've spent probably $900-1000 total for both, excluding the Intel DC SSDs. Found the 3.2TB Microns for $80 each on here ;) (https://forums.servethehome.com/index.php?threads/micron-7300-max-3-2tb-80-usd.41792/)
 

NPS

Active Member
Jan 14, 2021
147
44
28
Believe me, I had this mental battle quite a bit over the last few months.
Seems to align nicely with my thoughts, which I tried to describe in my last post. :)
Xeon is much cheaper for a complete system on the used market. EPYC can be interesting if you need the platform advantages or the cores. An EPYC in a desktop case can be interesting because CPUs and ATX boards are quite OK price-wise and the rest is cheap if you use desktop parts. Did that with a 7551 (used) on an H11SSL-i R2.0 (new) 2.5 years ago. It was more expensive at the time but is still very capable and has served me well for CPU-intensive jobs.
 

hotfur

New Member
Aug 16, 2022
3
0
1
I'd rather have an R630 that idles at 58W for my homelab than the new stuff that's 150-250W idle, considering it rarely goes above 10% CPU usage. Not to mention you can build up a nice R630 for $450ish with 128GB memory + storage.

1x E5-2690 v4: single-thread = 2000ish, multi-thread = 20k. Realistically, for EPYC, gen 1 does not match that single-thread score, gen 2 starts getting there, and gen 3 is too expensive for hobby use at this point.
Really? So newer servers have poorer power-saving features?
But I do agree that, in terms of single-threaded performance, the Xeon v4 would still be a very good option.
Those aren't that old. Sandy Bridge is still fair game for big storage, like ZFS systems.

There are still companies running Nehalem or older servers.
There are also people who want to start a homelab but have no $$$. They often start with Nehalem-series servers that cost around 200 USD.

A top Xeon v4 (like the E5-2690 v4) has very similar performance to a Gold 6132 (Skylake), and is just a bit behind (~6%) in single-core performance compared to Cascade Lake Xeons like the Gold 6230R.
Oh right, they can still serve well if the purpose is storage. In that case a newer CPU might not have that much value over the older ones.
 

hotfur

New Member
Aug 16, 2022
3
0
1
This is the real issue for future homelabs. Enterprise is moving towards higher power usage per server, which allows for, amongst other things, consolidation.

Higher power usage in the enterprise is leading to more liquid immersion and liquid cooling being built into the systems, which can be problematic for your normal homelabber to maintain.

The way I see it is that homelabbers may be relegated to edge-category equipment in the future... not that we ever needed heavy iron to begin with :p
You make me think of liquid-helium-cooled quantum computers.
 

ericloewe

Active Member
Apr 24, 2017
295
129
43
30
Really? So newer servers have poorer power-saving features?
Part of the cloud push is that on-premises is inefficient due to hardware sitting idle. As hyperscalers grow ever more relevant, their priority - efficiency under heavy load - beats out the priorities of us mere mortals - such as idle power consumption.

You can probably save power still, but you would need to seriously consolidate down to realize idle power savings. If you're running a single machine anyway, there's no way to consolidate within the same class - you'd need to move down to something like Xeon-D (which has also grown in idle power!) or even Atom (good ol' C3000 is still very viable for many applications), but good luck finding either at the low, low prices of a used R630 (especially the 8-bay models).
 

Fritz

Well-Known Member
Apr 6, 2015
3,391
1,393
113
70
Far too many eBay sellers are trolling for suckers; that's their game.
 

XeonLab

Member
Aug 14, 2016
43
14
8
This is the real issue for future homelabs. Enterprise is moving towards higher power usage per server, which allows for, amongst other things, consolidation.

Higher power usage in the enterprise is leading to more liquid immersion and liquid cooling being built into the systems, which can be problematic for your normal homelabber to maintain.

The way I see it is that homelabbers may be relegated to edge-category equipment in the future... not that we ever needed heavy iron to begin with :p
Not only cooling & power consumption but also price.

I had a look lately at DDR5 RDIMM board prices (Sapphire Rapids) and my nose is still bleeding... I mean, in the old days you could buy a poverty-spec motherboard for a top platform (E5 v2/v3/v4/Skylake-SP) for roughly 300 EUR/USD, get a cheap ES/QS CPU for it, and do a cheap CPU upgrade later when the big boys' decoms flooded eBay. Now the "same" board goes for 600 and the cheapest Bronze Xeons go for 500. And to make matters worse, that same board will still cost 600 when the Sapphire Rapids decom wave comes in 2030, isn't it lovely? :)

Not that I have huge trust in modern hardware lifetimes either: die shrinks have pushed things to extremes, the ATX PSU spec is at its limit, and JEDEC was forced to introduce on-die ECC for DDR5. At least the big boys in datacenters have 100G/400G Ethernet, while us mere mortals can only dream about P2P fiber and have to live with GPON.
 

XeonLab

Member
Aug 14, 2016
43
14
8
adapters are kinda cheap (100€ for a single-port CX-4 in the EU), but switches are darn expensive
OEM stuff or magic search keywords?

And I forgot where we are; I should have only mentioned 400G :D It's just the home networking stagnation, turbocharged by the COVID-era supply/hardware scarcity, that irks me greatly.