i9-7980XE $426 USD


Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,871
617
113
I'm guessing a comparable Supermicro motherboard and a 1st gen Epyc 16 core for about $800 would get other forum members votes here?
(Plus ECC & IPMI support)

Edit for clarity
First gen Epyc has NUMA and is so-so compared to something like Rome.
 

WANg

Well-Known Member
Jun 10, 2018
1,302
967
113
46
New York, NY
First gen Epyc has NUMA and is so-so compared to something like Rome.
Yeah, but Rome is also newer, so you'll have to pay today's prices.

Do you really need 16 cores, though?
As a person putting together a home lab, what kind of platform with similar specs could you use as a server in this price range?

Something with 16+ cores supporting a decent amount of memory.
Granted, there's no ECC or IPMI.

I purchased one, found a good-looking (quality) motherboard, and I'm up to almost $750. I was looking at getting another, but if someone can make a recommendation for something else, I'll take a hard look at it.
Just a novice's take.

Also, from what I've seen, the performance uplift from the newer Intel HEDT generations is pretty small.
Edited for clarity
It isn't much, no. I never considered the HEDT premium to be worth the outlay.

As for specific recommendations, it depends on what the power budget looks like, cooling conditions, room to house it, upfront purchasing cost, and other factors. For a homelab like mine, I have an internal limit of 20 amps per circuit, 27 cents per kWh, a 55 dB noise ceiling, and a practical weight limit on what my furniture can hold. I am also not about to commit wallet hara-kiri and then keep punching while it's down just to satisfy the local electric utility, so nothing with a three-digit TDP, for the reasons given above.
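That electricity constraint is easy to put numbers on. A quick back-of-the-envelope sketch of the yearly running cost (the 27 cents/kWh rate comes from the post above; the example wattages are hypothetical, not measurements):

```python
# Back-of-the-envelope yearly electricity cost for a 24/7 homelab box.
# The $0.27/kWh rate is from the post; the wattages below are examples.

HOURS_PER_YEAR = 24 * 365  # 8760

def annual_power_cost(avg_watts: float, usd_per_kwh: float = 0.27) -> float:
    """Yearly cost in USD for a machine drawing avg_watts around the clock."""
    kwh_per_year = avg_watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * usd_per_kwh

print(f"${annual_power_cost(60):.2f}/yr")   # a ~60 W embedded build -> $141.91/yr
print(f"${annual_power_cost(250):.2f}/yr")  # a three-digit-TDP HEDT box -> $591.30/yr
```

At 27 cents/kWh, the gap between a low-power embedded build and a big HEDT box is a few hundred dollars a year before you even account for cooling, which is the point being made here.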

If I have more than a few bits of coin to toss around? I'll probably go with a Supermicro E301-9D (Epyc 3251), as much RAM as I can reasonably afford, connect it to a dedicated NAS, use it for Proxmox or k8s, double up/cluster them together, and upgrade as the budget permits. That being said, considering how fast things become obsolete, I don't even want to make any big splurges this year if I can help it. If someone liquidates an HPE EL1000 10GbE chassis with an m510/D1587 cartridge for around 600 bucks, I might take it; I think it's a nifty alternative to Supermicro's Xeon-D 15xx cube, and it's a better value than HPE's silly little plastic toy (the EC200a). Considering my daily hypervisor duties ran off the AMD embedded version of a Ryzen 5 2600H (which is plenty for my needs), eeeeeh, I am not hungry to splurge.

I am saving up for the day Apple announces the return of the Xserve with a 64-core M1X setup.

(KIDDING, KIDDING!)
 
Last edited:
  • Like
Reactions: JediAcolyte

josh

Active Member
Oct 21, 2013
615
190
43
I looked at these when they were $300+ and gave them a pass, the #1 reason being the lack of ECC. Also, the X299 motherboards were pretty expensive for their limited capabilities.

I also considered the X570 WS + 5900X. The mobo was going for $200+ used on Prime Day. I passed on that as well, since the Ryzens are PCIe-lane starved (only 24 lanes).

Ended up getting an EPYC 7401P + ROME2D-T/EPYC2D-T. The ROME2D-T was $300 used on Prime Day; the CPU goes for $300+ as well. That's half the cost of the X570 combo and a third of a TR Pro setup, with the money saved going toward more DDR4, which seems to have gotten even more expensive. Because ultimately, that's all that matters.
 

JediAcolyte

Active Member
May 29, 2020
187
68
28
US
Yeah, @WANg looks like I jumped the gun a bit. I have some AMD systems already running and I thought grabbing something Intel would be a good idea to mix it up. I'm regretting not getting the Epyc already. Next stimulus check or tax return...maybe.
As for needing the cores, I think it's something I'll grow into as I learn more. I will certainly be using this as a NAS device until I have something purpose-built. I'll try to virtualize TrueNAS and use some unimportant files as test material.
I guess I'll go from there and see what I can learn.
 
Last edited:

boomheadshot

Member
Mar 20, 2021
64
3
8
lol @ all of the normies saying that Linus can't be trusted and that this seller is a shady guy. You can check their website, their Alibaba page, and their AliExpress page; these guys are legit. I've bought >15 CPUs from them to resell and never had a single problem. I contacted them directly via email/Skype, they let me pay via PayPal, and you can get a slightly lower price that way; it used to be like 20% lower, now it's only like 10-15%. The prices have come up since the video came out, so you may need to wait a little bit.

It's funny how people start scoffing at these deals just because they're way less than the overinflated price; the price was inflated AF in the first place. You probably pay $200 for your T-shirts just to feel you are worth something, lmao.

edit: the 9-series 10 and 12 core chips also have more L3 cache than their 7-series counterparts
 
Last edited:

Totalfreq

New Member
Jul 3, 2021
10
0
1
That's not a bad deal at all... the i9-7900X is just the tray version of the retail i9-9900X series, right?

I may just pick up an i9-7940X for the fun of it... after all, it's the "less binned" version of the unicorn i9-9990XE... oh, and was there an i9-7990XE? That'd probably be even rarer.

I'm currently building an X299 system and everything is pricey. Yes, I'm going top end, but even a base mobo is hard to find, and if you want to OC or run those higher chips you'll need a good VRM and lots of cooling. If you are going to OC, the cheapest quality board for the i9-79xx/99xx X299 series is probably the EVGA Dark, if you can find it. The ASUS Pro WS X299 Sage II has a PLX chip on board and 7 x16 PCIe slots (that's right, 112 lanes on a 44-lane chip), but that's what PEX switching is for, right? I've been wanting that, but I'm currently building around an ASUS Rampage VI Extreme Encore, which wasn't terrible at $750. But RAM's expensive, everything needs lots of cooling, VROC needs a key for direct 128 GB/s on-CPU RAID configs other than RAID 0, and it's only PCIe Gen 3. It'll probably end up being $10k by the time it's built... so Rome was definitely in the same ballpark...

That's not to say Intel has slouched in breaking the 5 GHz barrier, something AMD should be doing given its lead in die size, but I think Intel is missing the boat. They should be leveraging the QPI/UPI links not to put more cores per chip like AMD, but to make the HEDT parts more like Xeons than i9s: dual- or quad-socket, with PCIe Gen 4 support added. 2x HEDT chips on one board riding PCIe 4 would give AMD a run for their money. I'm still using several dual-Xeon Dell Precision workstations from 2011. Why? 12 MB of L3 cache per CPU, 8c/16t @ 3.86 GHz (X5687) or 12c/24t @ 3.73 GHz (X5690), and so much (cheap) RAM I don't know where to put it all. If it weren't PCIe Gen 2, I'd probably never have added new PCs to the stable... rock solid and still crushing it.

Where Intel also still holds an advantage is Optane as cache... nothing is faster than SSD cache on the bus.
 
Last edited:

Totalfreq

New Member
Jul 3, 2021
10
0
1
Oh, good to know... it's a Skylake 14c though, right? I just moved over from the Xeon and i7-8086K & i9-9900KS side, so this will be my first HEDT build.

Wow, looked them up. I'd take the 7940X over the 9940X (because of the price delta and OC headroom), unless I'm missing something here. They are both Skylake, 44-lane Gen 3, and the turbo modes are only +200 MHz apart... but the Tjunction on the 7940X is 102°C vs 88°C on the 9940X. I know Tjunction is not relevant to any real purpose for an end user, but a 14°C delta with identical TDP suggests either the +200 MHz difference or, worse, lesser-binned silicon. Did they push the clock to create a new "gen" but pull all the good bins of the 9940X for the 9990XE, compared to the i9-7940X?

7940X vs 9940X

They pretty much look the same to me... now, the 10th gen is a real gen update.
 
Last edited:

boomheadshot

Member
Mar 20, 2021
64
3
8
They are exactly the same: the 7-series and 9-series are identical, except the 7-series has TIM and starting from the 9-series you get solder. The only difference between the 10-series and the 7-series is that the 10-core and 12-core chips have more L3 cache than their 7-series counterparts, plus higher frequencies (because of the solder). So the 7980XE is the same as the 9980XE and 10980XE; you just need to delid it and apply liquid metal (you can buy a cheap delidder on AliExpress for like $10), but ideally you want a direct-die frame like this (keep in mind that from 12 cores and up you need a version with a wider gap because of the bigger die size; this is mentioned in the eBay ad).

Yes, it's insane that you can get a monster of a CPU for X299 for peanuts, but it is offset by the cost of the motherboard, a mandatory CPU+VRM waterblock (if you're planning to overclock; if you're not, you might as well buy a Ryzen), no ECC support (and pricier RAM), and insane power draw with an extremely costly cooling solution. So at the end of the day, you should only look at something like this if you actually have a serious workstation load AND you want to game on the same PC. But you might as well just get a Ryzen 5600X build for gaming (where any cooler will suffice) and a completely separate PC for other uses.

It's like buying a 10-year-old Mercedes AMG. Yeah, it used to cost a shit-load, yes, it's much cheaper now, but there are so many things to watch out for at which point you start scratching your head, debating whether or not you even want this in the first place.

It makes sense if you can get a motherboard for dirt cheap, and in some countries that's possible. Here in Russia, I've bought a few middle-of-the-pack mobos for $100 (MSI X299 Tomahawk AC / Gigabyte X299 Aorus Gaming 7 / ASUS Prime X299), but I had to hunt hard and wait for the sellers.
On eBay you only get the crappy mobos for $200, with shoddy VRMs that will only allow some light overclocks, so you're best off landing a motherboard deal locally and then building around that.
 

Totalfreq

New Member
Jul 3, 2021
10
0
1
They are exactly the same: the 7-series and 9-series are identical, except the 7-series has TIM and starting from the 9-series you get solder. The only difference between the 10-series and the 7-series is that the 10-core and 12-core chips have more L3 cache than their 7-series counterparts, plus higher frequencies (because of the solder). So the 7980XE is the same as the 9980XE and 10980XE; you just need to delid it and apply liquid metal (you can buy a cheap delidder on AliExpress for like $10), but ideally you want a direct-die frame like this (keep in mind that from 12 cores and up you need a version with a wider gap because of the bigger die size; this is mentioned in the eBay ad).

Yes, it's insane that you can get a monster of a CPU for X299 for peanuts, but it is offset by the cost of the motherboard, a mandatory CPU+VRM waterblock (if you're planning to overclock; if you're not, you might as well buy a Ryzen), no ECC support (and pricier RAM), and insane power draw with an extremely costly cooling solution. So at the end of the day, you should only look at something like this if you actually have a serious workstation load AND you want to game on the same PC. But you might as well just get a Ryzen 5600X build for gaming (where any cooler will suffice) and a completely separate PC for other uses.

It's like buying a 10-year-old Mercedes AMG. Yeah, it used to cost a shit-load, yes, it's much cheaper now, but there are so many things to watch out for at which point you start scratching your head, debating whether or not you even want this in the first place.

It makes sense if you can get a motherboard for dirt cheap, and in some countries that's possible. Here in Russia, I've bought a few middle-of-the-pack mobos for $100 (MSI X299 Tomahawk AC / Gigabyte X299 Aorus Gaming 7 / ASUS Prime X299), but I had to hunt hard and wait for the sellers.
On eBay you only get the crappy mobos for $200, with shoddy VRMs that will only allow some light overclocks, so you're best off landing a motherboard deal locally and then building around that.
OK, so that makes a lot of sense. I went digging for odd things like AES-NI support but couldn't find any difference between the 7- and 9-series in this case; other than the +MHz offset, they looked like the same CPU. But the TIM vs. solder difference and the thermal headroom on the 7-series actually make it more appealing to me if you want to delid and go for broke.

And yeah, my current X299 build will probably be looking at 1500-1600 W just for the CPU OC'd to all-core Turbo 2 and 2x GPUs at XOC on auto. Wouldn't be terrible, but the cooling, larger case, mobo, and RAM are exponentially more expensive.
 

Totalfreq

New Member
Jul 3, 2021
10
0
1
I'm guessing a comparable Supermicro motherboard and a 1st gen Epyc 16 core for about $800 would get other forum members votes here?
(Plus ECC & IPMI support)

Edit for clarity

Why not a Dell R820 quad socket? I like the QPI the Xeons offer, and with the newer scalable UPI you can just 2x or 4x any system pretty reasonably. Even a few gens back they are awesome for virtualization, and the Dells are dirt cheap because everyone runs them.

Something like this:

~$1,100 USD
PCIe Gen 3, 120 lanes
32 cores @ 2.4 GHz
128 GB RAM
2x 900 GB enterprise 10K SAS


Or a "dell" Facebook custom order c6100 quad blade dual x5600 with 12x 2.5 bays and mezzanine cards. Great for connecting one blade to a San and running three heads, or having each use their own caching ssd and spinning disks on-board. These had 192gb per blade, dual x5675 3ghz chips fiber channel mezzanine cards. I set mine up with 3 blades as vm heads 12 cores per blade, 16gb ram per core. And one as my storage blade. I used it to cache across 4x 200gb ssds and then write to the 8 drives in a raid 10 2x4 config. Crazy power for the money, you just have to be creative in your search or ask around. Those boxes are found now for like $1-2k pretty loaded and running strong. Power hungry though.. I pulled ram just to save power on a lot of the boards.

Really depends on purpose though: GHz per core, L3, RAM, PCIe version. Lots of options in both AMD and Intel.
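The RAID 10 (2x4) layout described above is easy to reason about: drives are mirrored in pairs and the pairs are striped, so usable space is half the raw total. A small illustrative sketch (the 8-drive count is from the post; the 900 GB size reuses the SAS drives mentioned earlier purely as an example):

```python
# Usable capacity of a RAID 10 array: drives are mirrored in pairs,
# then the mirrored pairs are striped, so you keep half the raw capacity.

def raid10_usable_gb(drives: int, drive_gb: float) -> float:
    if drives < 4 or drives % 2:
        raise ValueError("RAID 10 needs an even number of drives, minimum 4")
    return drives / 2 * drive_gb

# The 8-drive (2x4) config from the post, with 900 GB disks as an example:
print(raid10_usable_gb(8, 900))  # 3600.0 GB usable out of 7200 GB raw
```

Note that the 4x 200 GB SSD cache tier sits in front of this array and does not add to usable capacity.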
 
Last edited:

NobleX13

Member
Oct 2, 2014
74
39
18
35
I have been toying with the idea of building one large 16-core box to consolidate some physical machines I have that run various services. But considering that I am running Haswell-based mini PCs that were almost free, it's hard to justify the cost. I'm sure there are some power savings to be had, but not that much.

If only DDR4 were a bit less expensive.
 

cw823

Active Member
Jan 14, 2014
414
189
43
Why not a Dell R820 quad socket? I like the QPI the Xeons offer, and with the newer scalable UPI you can just 2x or 4x any system pretty reasonably. Even a few gens back they are awesome for virtualization, and the Dells are dirt cheap because everyone runs them.

Something like this:

~$1,100 USD
PCIe Gen 3, 120 lanes
32 cores @ 2.4 GHz
128 GB RAM
2x 900 GB enterprise 10K SAS


Or a "dell" Facebook custom order c6100 quad blade dual x5600 with 12x 2.5 bays and mezzanine cards. Great for connecting one blade to a San and running three heads, or having each use their own caching ssd and spinning disks on-board. These had 192gb per blade, dual x5675 3ghz chips fiber channel mezzanine cards. I set mine up with 3 blades as vm heads 12 cores per blade, 16gb ram per core. And one as my storage blade. I used it to cache across 4x 200gb ssds and then write to the 8 drives in a raid 10 2x4 config. Crazy power for the money, you just have to be creative in your search or ask around. Those boxes are found now for like $1-2k pretty loaded and running strong. Power hungry though.. I pulled ram just to save power on a lot of the boards.

Really depends on purpose though: GHz per core, L3, RAM, PCIe version. Lots of options in both AMD and Intel.
But it's not 2007, so 1366 is kind of crazy to spend money on at this point.
 

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,871
617
113
But it's not 2007, so 1366 is kind of crazy to spend money on at this point.
This, so much. X58/1366 is a dead platform. Anything below Broadwell/Xeon v4 is pretty much a boat anchor at this point.

Unless you can get it very, VERY cheap (read: free) don't bother.
 

Totalfreq

New Member
Jul 3, 2021
10
0
1
Jedi was asking about a home lab environment using a gen 1 16-core Epyc. A T610 or T7500 with dual X5690s would be quite comparable for a lot of virtualization needs: 80 lanes, 24 MB of L3 cache, 12c/24t @ 3.6 GHz... and a price tag to match the gen 1 Epyc and mobo at $800. Now, if you need PCIe Gen 3, discussion over.

But I found that a Dell T7500 only just begins to saturate the old Xeons with a 1070 Ti... and you're talking about a 10% loss benchmarking Fire Strike.

Compare what I assume was mentioned, a Threadripper 1950X, against a SINGLE X5690... then realize most of the X56xx chips went into dual-socket setups.


I agree that without a bridge to PCIe 3... the X56 is practically dead. But I run a whole host of them as both desktops and servers, because price vs. performance is hard to match when you get multiple sockets.

That's all I was saying... frankly, the X299 HEDT platform being PCIe 3 with no QPI/UPI makes me wonder if it's already dead... we will see what Sapphire Rapids brings.

Now back to ordering parts for my HEDT build lol

I keep going back and forth on whether it's worth it to spend nearly $500 on an i9-7940X as a burner chip to delid and massively OC to stress-test my infrastructure (PC/cooling loop).
 
Last edited:

alex_stief

Well-Known Member
May 31, 2016
884
312
63
38
Could we please stop comparing CPUs from different decades with completely different architectures based on specs like "L3 cache" and "aggregate frequency"? It just doesn't make any sense.
 

Totalfreq

New Member
Jul 3, 2021
10
0
1
Could we please stop comparing CPUs from different decades with completely different architectures based on specs like "L3 cache" and "aggregate frequency"? It just doesn't make any sense.
For a lab VM box in a chip shortage, I was saying you don't have to have current gear or spend a fortune to get a box with a lot of RAM and cores, assuming the need for 16 cores is for multiple VMs and not a single database. And where there is a lot of multithreading, like a VM environment, the ratio of L3 per core is significant, since L3 sits between the fetch and execute cycles. Of course there is a balance of other parts (core speeds, architecture, RAM quantity and speed, IOPS, etc.), but apples to apples in a well-designed system, even generations apart, the higher the L3 per core, the faster it will respond under load, be it AMD or Intel.
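To put numbers on the L3-per-core ratio being argued about, here is a quick sketch. The cache and core counts are the published specs for these parts as I recall them; double-check the vendor spec pages before relying on them:

```python
# L3 cache per core for a few chips mentioned in this thread.
# Figures are published specs from memory -- verify against
# ark.intel.com and AMD's product pages before relying on them.

cpus = {
    # name: (L3 in MB, cores)
    "Xeon X5690": (12.0, 6),
    "i9-7980XE": (24.75, 18),
    "EPYC 7401P": (64.0, 24),
}

for name, (l3_mb, cores) in cpus.items():
    print(f"{name}: {l3_mb / cores:.2f} MB L3 per core")
```

By this one metric the old Westmere Xeon actually holds up, which is part of the argument being made; whether that translates into real-world throughput across a decade of architectural change is exactly what alex_stief is pushing back on.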

But I digress; I'll get back on point with the OP. Thanks for the heads-up on the CPUs. I'll probably be picking up a 7940X to burn in the rig and make sure it can handle the loads adequately before I drop in the main chip.
 
Last edited:

JediAcolyte

Active Member
May 29, 2020
187
68
28
US
@Totalfreq
I was intending to compare with a first-gen Epyc, which would give me more PCIe lanes, ECC support, and IPMI.
I'm also trying to look at price, performance, and power utilization. A total cost of ownership, if you will.
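That total-cost-of-ownership framing can be sketched as purchase price plus electricity over a holding period. All of the prices, wattages, and the $0.13/kWh rate below are hypothetical placeholders for illustration, not figures from the thread:

```python
# Rough TCO sketch: purchase price plus 24/7 electricity over a holding
# period. Every number here is a hypothetical placeholder.

def tco_usd(purchase_usd: float, avg_watts: float, years: float,
            usd_per_kwh: float = 0.13) -> float:
    """Purchase cost plus round-the-clock power cost over `years`."""
    kwh = avg_watts / 1000 * 24 * 365 * years
    return purchase_usd + kwh * usd_per_kwh

# e.g. an $800 used Epyc build at ~120 W vs. a $726 HEDT chip+board
# at ~250 W, both held for 3 years:
print(round(tco_usd(800, 120, 3)))  # 1210
print(round(tco_usd(726, 250, 3)))  # 1580
```

The takeaway is that a higher sticker price can still win on TCO if the sustained draw is lower, which is why power utilization belongs in the comparison.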
@alex_stief
I agree with you on comparing across generations, to an extent. From when AMD dropped out of the CPU space until they began releasing Ryzen and Epyc, Intel's generational improvements were mid-single-digit percentages at best. So it takes a few generations for the improvements to be worth the upgrade, IMHO.
 
Last edited:
  • Like
Reactions: Totalfreq