Xeon E5-v4 for 1S workstation? 1660, 2667, 2687w, 2689?


larrysb

Active Member
Nov 7, 2018
108
49
28
I'm looking to upgrade a workstation currently running an E5-2697 v3 (14-core) CPU with something higher-frequency for a little more oomph. I've been running another identical machine with an E5-2667 v4, and despite being 8-core, it's running through my current workloads a little faster. They're 1S motherboards, so the 1660 (or the rare 1680) would work fine.

There's an embarrassment of riches in pulled Xeon E5 v4s available on the used market. Prices are all over the place, not at all reflective of original pricing or capability. I don't really need a high core count (those parts are going incredibly cheap now); a frequency-optimized part with 8-12 cores would suit my needs better.

The 2667 v4 is working out pretty well. I've wanted the 2687W, but seemingly so has everyone else. A little less clock speed, but 12 cores.

The E5-2689 v4 10-core appears to be the barn-burner here, but there aren't many of those in the pulled inventory, and asking prices are pretty high. It was an off-roadmap SKU, so I'm not sure how many exist.

From a spec-sheet comparison, the 1660 and 2667 appear quite similar, with the 1660s going for a little less money. There's a little less cache on the 1660, but in a 1S situation that's probably not an issue.


What do you guys think of 1660-v4 vs the 2667-v4 in a 1S motherboard?

I can wait; the 2697 v3 is still working.
 

Whaaat

Active Member
Jan 31, 2020
304
158
43
What do you guys think of 1660-v4 vs the 2667-v4 in a 1S motherboard?
Very close to what I'm about to compare.
My current workstation is a Dell T7810 with a 1650 v4; I got it off eBay with this CPU fitted from the factory. The entire workstation cost about $200 including P&P, so I couldn't resist buying it. The CPU alone is worth half the asking price. Six cores running at 3.8 GHz at full non-AVX load give plenty of power. The AVX all-core multiplier is 35, and I've never seen CPU power consumption exceed 90 W. Nevertheless, I thought about more cores and won a 2687W v4 for $205 a week ago. Still waiting for it to arrive to make a thorough comparison.

For sure the 2689 v4 is the best of the frequency-optimized options among the 2600 v4 line. With 10 cores it has only one internal ring, so it doesn't need any of that cluster-on-die and snoop-mode optimization and tuning bullshit. Ten cores are also just right for saturating the 4-channel memory controller, so the cores aren't yet fighting each other for memory bandwidth. The price is a sad story, though...
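To put rough numbers on the bandwidth point, here's a back-of-envelope sketch; DDR4-2400 (Broadwell-EP's top official memory speed) and a 64-bit channel are the assumptions:

```python
# Back-of-envelope: peak 4-channel DDR4-2400 bandwidth and the share
# each core gets if all cores stream at once. Illustrative only.
channels = 4
transfers_per_s = 2400e6   # DDR4-2400
bytes_per_transfer = 8     # 64-bit channel

peak = channels * transfers_per_s * bytes_per_transfer
print(f"peak: {peak / 1e9:.1f} GB/s")          # ~76.8 GB/s

for cores in (8, 10, 14, 22):
    print(f"{cores:2d} cores: {peak / 1e9 / cores:.1f} GB/s per core")
```

At 10 cores each core still gets close to 8 GB/s of theoretical headroom; at 22 cores that share drops by more than half.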

I wouldn't bother changing a 2667 v4 to a 1660 v4 or vice versa, as the performance difference will be slim to none. The smaller L3 cache in the 1600 v4 may even be a benefit, since it means lower memory latency for most applications, but that of course depends on your typical workload. Also, with the 1660 v4 you get Turbo Boost Max 3.0, which requires a separate driver (!) even with the latest Windows 10 20H2, along with a dumb Intel application in which you have to manually choose which applications deserve an individual turbo-core assignment. I was unable to find any tangible speed benefit using this Intel software crap.
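If you want to sanity-check what Turbo Boost Max 3.0 is actually doing without Intel's utility, here's a minimal Linux-side sketch (it assumes a single-threaded load is running; Windows still needs the driver mentioned above, and core numbering varies by setup):

```python
# Watch per-core clocks from /proc/cpuinfo; under a single-threaded
# load, Turbo Boost Max 3.0 should park the work on a "favored" core
# that clocks above the rest. Linux-only sketch.
import re, time

def core_mhz():
    with open("/proc/cpuinfo") as f:
        return [float(x) for x in re.findall(r"cpu MHz\s*:\s*([\d.]+)", f.read())]

for _ in range(5):
    mhz = core_mhz()
    fastest = max(range(len(mhz)), key=lambda i: mhz[i])
    print(f"core {fastest} at {mhz[fastest]:.0f} MHz (of {len(mhz)} logical cores)")
    time.sleep(1)
```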
 

larrysb

Active Member
Nov 7, 2018
108
49
28
Yeah, the workstation with the 2667 v4 is running great. It's actually beating the other workstation with the 2697 v3 (both more or less the same otherwise), except in some cases where I can really load up the cores with parallel tasks (compiling, for instance).

I'm thinking about upgrading the 2697 v3 (14-core) workstation to a higher-clocked E5 v4, now that they're more available on the used market.

They're both running deep-learning setups with pairs of Turing-class GPUs.

I can't see any advantage to the 2667 v4 vs. the 1660 v4, with the 1660 being cheaper. Lots and lots of 2667 v4s on the market; fewer 1660s and almost no 1680s.

The 2689 v4 is a 10-core screamer, but I haven't seen many of them, and the asking prices are too high. It would be my favorite, though: high clocks, just the right number of cores.

The 2687W is 12 cores at a somewhat lower clock than the 2667, but the extra 4 cores are sometimes handy. Plenty of those available, and the price is moderate.

The odd duck might be the 2697A v4: 16 cores, higher turbo boost, lower base clock, but tons of them on the market, cheap.
 

larrysb

Active Member
Nov 7, 2018
108
49
28

Yes. It is what Intel calls "off-roadmap". I'm not sure who they made the 2689 v4 for, but with 10 cores, scorching clock speeds, and a high TDP, it's the fastest one in the E5 v4 lineup that I know of. Rare, but they're starting to show up.

Another one that's "off-roadmap" is the 2696 v4, which is a 2699 with slightly higher base and turbo clock speeds and a 165 W TDP. It won't run in a lot of servers due to the high TDP, but if you want all those cores and have a 1S workstation board like an Asus X99, it is quite the trick. Cheap on the pulls market, too.
 

larrysb

Active Member
Nov 7, 2018
108
49
28
Most of the multi-socket server boards had a 150 W TDP limitation, or even lower in a lot of cases. There were exceptions, of course.

It's kind of interesting to see the off-roadmap parts starting to show up as they're decommissioned. Hyperscalers who bought enough volume could get their own special parts from Intel, as could big OEMs who bought enough volume of standard parts.
 

larrysb

Active Member
Nov 7, 2018
108
49
28
Picked up a very inexpensive E5-1660 v4 and swapped out the 2697 v3. My multicore benchmarks dropped, of course (14c/28t down to 8c/16t), but the more common tasks are faster and idle power is reduced (the v3-to-v4 move is a die shrink to 14 nm). It seems completely equivalent to the other workstation running the 2667 v4 in every way.

If the price of pulls comes down, I may grab a 2689 v4. They're just too expensive right now.
 

XeonLab

Member
Aug 14, 2016
40
13
8
Yes. It is what Intel calls "off-roadmap". I'm not sure who they made the 2689 v4 for, but with 10 cores, scorching clock speeds, and a high TDP, it's the fastest one in the E5 v4 lineup that I know of. Rare, but they're starting to show up.

Another one that's "off-roadmap" is the 2696 v4, which is a 2699 with slightly higher base and turbo clock speeds and a 165 W TDP. It won't run in a lot of servers due to the high TDP, but if you want all those cores and have a 1S workstation board like an Asus X99, it is quite the trick. Cheap on the pulls market, too.
I've been toying with the idea of upgrading my 1P workstation (E5-2618L v3), as prices for Haswell-EP/Broadwell-EP CPUs have come down lately due to decommissions, and if I wanted to go down the 2P route, suitable motherboards are still available. EPYC is still a somewhat expensive platform, and Skylake-SP / Cascade Lake-SP are ruined by Intel's bean counters replacing the indium TIM with toothpaste, which might be fine for low-TDP CPUs for the first 3 years, but not for those of us who buy high-end chips after they've been at full beans in a datacenter for 5 years.
 

larrysb

Active Member
Nov 7, 2018
108
49
28
Actually, the TIM paste wasn't a bean-counter move. It is about as effective as the indium solder. People have de-lidded these CPUs and replaced the paste with solder, but with little improvement. The Skylake-X gen 2 got indium solder for the enthusiast products due to internet pressure (whining), but it didn't help thermals. Intel's engineering is quite good most of the time; the ckufery usually comes into play in marketing and product segmentation (as it does at Nvidia and most others).

The main engineering advantage of indium solder is that it doesn't "pump out" over time, though TIM pastes have gotten tremendously better in recent years in that respect. TIM paste can also be pad-printed more precisely in production and doesn't require the heat cycle that solder does. It improves yields in packaging and works about as well, so they went with paste.

As far as I'm concerned, Broadwell-EP is one of the best workstation-class CPUs out there, still today, for those who need PCIe lanes, memory, ECC, and stable performance. It is well-suited for GPU compute on the desktop too, where you need good PCIe and memory bandwidth and the CPU is not the primary bottleneck.

Skylake W-21xx/W-22xx are the 'Xeon' flavor of the Skylake-X HEDT lineup. The main advantage over the Skylake consumer parts is the motherboards; the gamer stuff has gotten to be garbage these days. Intel market-segmented X299 away from the C422 chipset: otherwise-identical mobos differ only in part numbers but won't cross-support Xeon/HEDT the way X99 did. The used market isn't going to pan out well there; too much segmentation and not enough units sold. Had Skylake gotten a die shrink earlier in the game to rein in the power, I think it might have done better. It's also got the new internal 'fabric' PCIe complex, which is fine for CPU-bound stuff and virtualization, not so much for GPU compute.

Then we get Xeon Scalp-able (pun intended), where you need a secret decoder ring to sort out the 800 or 900 different SKUs.

When you get there ($$$), you might as well go AMD (more bang per $$$).

It will be interesting in a few years when AMD servers get decommissioned and the CPU pulls are BIOS-locked and worth their weight in scrap metal.
 

XeonLab

Member
Aug 14, 2016
40
13
8
Actually, the TIM paste wasn't a bean-counter move. It is about as effective as the indium solder. People have de-lidded these CPUs and replaced the paste with solder, but with little improvement. The Skylake-X gen 2 got indium solder for the enthusiast products due to internet pressure (whining), but it didn't help thermals. Intel's engineering is quite good most of the time; the ckufery usually comes into play in marketing and product segmentation (as it does at Nvidia and most others).

The main engineering advantage of indium solder is that it doesn't "pump out" over time, though TIM pastes have gotten tremendously better in recent years in that respect. TIM paste can also be pad-printed more precisely in production and doesn't require the heat cycle that solder does. It improves yields in packaging and works about as well, so they went with paste.
....
While I don't want to turn this into a heated (pun intended) argument, I beg to differ with you on the TIM question.

A thermal-paste TIM simply can't be as effective as indium or liquid metal, nor have an adequate lifetime, since pastes dry out sooner or later. The absolute performance difference varies, but even with a relatively low-TDP CPU we can be talking about a 5 °C difference, as you can see from the video below:


We could talk about engineering, indium, difficulties with thin PCBs, and thermal-paste qualities all night, but there are three things that make me smell a rat here (and it's not Intel's fixation on a 60% margin):

1. AMD still uses soldered indium TIM through the whole lineup. Why would they still do that if paste TIMs are superior in every regard?

2. Intel abandoned indium TIM from Ivy Bridge consumer CPUs onwards but kept it in the server CPUs until Skylake-SP. Why would they keep indium TIM in big, low-yielding server CPUs, which to my knowledge are more difficult to solder than small-die desktop CPUs?

3. Seemingly Intel has gone back, to some extent, to indium TIM even in upcoming server CPUs.


Why would they do it now, when they have struggled massively with the 10 nm process, and why risk the low-yielding 10 nm dies with a "risky" soldering process?

Edit: Cooper Lake, the CPU in the article, is 14 nm, but my argument about re-introducing indium TIM in server CPUs still stands.
 

XeonLab

Member
Aug 14, 2016
40
13
8
Well, you can differ if you like. I happen to know this for certain. :)
Then tell me why thermal performance improved by several degrees in the video I posted when the original TIM was replaced with liquid metal? Or why Intel has at least tested it in some LGA4189 CPUs? Or why AMD still uses indium TIM if there are no advantages to doing so?
 

larrysb

Active Member
Nov 7, 2018
108
49
28
Then tell me why thermal performance improved by several degrees in the video I posted when the original TIM was replaced with liquid metal? Or why Intel has at least tested it in some LGA4189 CPUs? Or why AMD still uses indium TIM if there are no advantages to doing so?

Let me fill you in.

Gallium is not indium. They are very, very different materials.

First, in the link, der8auer used a gallium alloy, aka "liquid metal". It is a room-temperature liquid.

Gallium is electrically conductive and corrosive. Being liquid means it fills gaps very nicely, which is why it is so effective, but it is entirely unsuitable for a production component for those and other reasons: it migrates over time and will not stay where it is put, it attacks other metals, and it is conductive. That's why it isn't used.

Intel, AMD, and others have traditionally applied indium solder between die and heat spreader. The alloys used run about 30 W/(m·K). That's higher than TIM paste. The primary advantage of indium solder is longevity: once it is re-flowed in the soldering oven, it is stable and doesn't migrate or pump out over extended time.

The TIM paste Intel used in assembling die to heat spreader is about 7-8 W/(m·K). While the number is lower, and that gets the internet all wee-wee'd up, it does the job just fine. One of the reasons is that it is applied very thinly and fills the gaps between die and heat spreader better. It also doesn't require the interface plating on the heat spreader or any kind of flux. It is easier to control in production and provides consistent results.
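For a sense of scale, here's a quick conduction estimate using ΔT = P·t/(k·A); the power, bond-line thickness, and die area are assumed round numbers for illustration, not Intel's figures:

```python
# Temperature drop across the die-to-IHS interface: dT = P * t / (k * A).
# Power, thickness, and area are assumed round numbers, not Intel specs.
P = 140.0      # W flowing through the interface (high-TDP Xeon class)
t = 50e-6      # m, assumed 50 um bond line
A = 3.0e-4     # m^2, ~300 mm^2 die (assumed)

for name, k in (("indium solder", 30.0), ("TIM paste", 7.5)):
    print(f"{name:13s} k={k:4.1f} W/(m*K): dT = {P * t / (k * A):.1f} C")

# indium ~0.8 C vs paste ~3.1 C at the same bond line; a paste layer
# printed thinner than the solder closes most of that gap.
```

On those assumptions the whole indium-vs-paste fight is worth a couple of degrees at the interface.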

Every part that Intel shipped with TIM paste met all of their published specifications. In practice, the TIM paste worked just as well as indium solder for most applications. That's the truth, whether people accept it or not.

Intel wouldn't have gone to the considerable expense and trouble of changing the equipment on a packaging line over to TIM paste without thoroughly testing and proving it internally. The changeover of process equipment is a very, very big deal. It doesn't happen on a whim.

There are examples of people who have de-lidded the i9 X-series processors, attempted to solder them themselves with indium, and not gotten anywhere with it.

The thermals from the second rev of the i9 Extreme, where solder was used again, didn't show mythical improvements either.

As a matter of reality, you would gain more thermal performance from thinning the die than from the difference between indium solder and TIM paste...

And pretty much everyone uses TIM paste between the heat spreader and the cooling solution in their product or computer anyway, so what is the big deal?
 

XeonLab

Member
Aug 14, 2016
40
13
8
I appreciate your knowledge of this subject, and your argument about paste TIM being good enough for a 3/5/7-year server lifetime is valid. And to make things clear, I should have written that in the video I posted, gallium is "almost equal in heat dissipation to factory indium solder"; I didn't intend for it to be used as an indium replacement. But there's no denying that if you want absolute top thermal performance and longevity, soldered indium TIM is the way to go.

However, if for some reason or another (NDAs etc.) you don't want to answer my questions about why AMD still uses indium TIM, or why Intel is at least to some extent considering it for next-gen big-die server CPUs, I'm completely fine with that. Let's not take this argument too far; we are only talking computers here. :)
 

larrysb

Active Member
Nov 7, 2018
108
49
28
All in good spirits! :)

I can't speak to why AMD is or isn't using whatever they're doing. They may feel it works better, or that their customers want it, or they may not see any reason to bother changing an established process on their production line because they've always done it that way.

There are all kinds of ways to deal with the thermal interface between the die and the heat spreader.

TIM paste isn't the evil it is made out to be.
 

KC8FLB

Member
Aug 12, 2018
71
55
18
I am in the middle of the same research as you. I made a Google Sheet with all E5-x6xx v3 & v4 procs, comparing cores, speeds (max vs. all-core), and most importantly, used sold prices on eBay to determine relative value. Explore it and let me know if it helps you and others:


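For anyone who wants to reproduce the idea without the sheet, a minimal sketch of that kind of relative-value calculation; every figure below is an illustrative placeholder except the $205 2687W v4 price mentioned earlier in the thread:

```python
# Crude relative-value metric in the spirit of the sheet:
# throughput proxy (cores x all-core clock) per dollar of used price.
# All clocks and prices are illustrative placeholders, except the $205
# paid for a 2687W v4 earlier in this thread.
chips = [
    # model, cores, all-core GHz (assumed), typical eBay sold price $
    ("E5-1660 v4",   8, 3.5, 110),
    ("E5-2667 v4",   8, 3.5, 130),
    ("E5-2687W v4", 12, 3.2, 205),
]

for model, cores, ghz, usd in chips:
    print(f"{model:12s} {cores * ghz:5.1f} core-GHz / ${usd} = "
          f"{cores * ghz / usd:.3f} core-GHz per $")
```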
Feedback welcome and appreciated.
 

Whaaat

Active Member
Jan 31, 2020
304
158
43
Feedback welcome and appreciated.
Very good, but you are not covering the AVX turbo frequency multipliers. For some SKUs the AVX all-core turbo may even be below the non-AVX base frequency. Keep this in mind when choosing the best option for your specific workload. Mine is 90% AVX-related, which is where TDP and AVX turbo come into play.
[Attachment: avx_freq.JPG, a table of AVX turbo frequencies]
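A minimal sketch of that per-SKU comparison; the E5-1650 v4 figures are the ones reported above (3.8 GHz non-AVX all-core, 35x AVX multiplier at the 100 MHz BCLK), and the second row is a made-up placeholder:

```python
# Worst-case sustained clock: non-AVX all-core turbo vs AVX all-core
# turbo. E5-1650 v4 figures come from this thread; the second row is
# a hypothetical placeholder, not a verified spec.
BCLK_GHZ = 0.100

skus = [
    # model, non-AVX all-core GHz, AVX all-core multiplier
    ("E5-1650 v4", 3.8, 35),
    ("hypothetical 26xx v4", 3.0, 26),
]

for model, non_avx, avx_mult in skus:
    avx = avx_mult * BCLK_GHZ
    drop = 100 * (1 - avx / non_avx)
    print(f"{model:20s} non-AVX {non_avx:.1f} GHz, "
          f"AVX {avx:.1f} GHz ({drop:.0f}% lower)")
```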
 

KC8FLB

Member
Aug 12, 2018
71
55
18
Very good, but you are not covering the AVX turbo frequency multipliers...
I have added the AVX base, max, and all-core frequencies to the table for the entries I could find. Thanks for giving me something else to learn about and dig into. Now I am going to read up on what types of real-world tasks/applications I should expect AVX to be used in.
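A quick way to see the kind of work AVX accelerates is dense floating-point array math. A minimal sketch below, with the caveat that whether numpy's kernels actually hit AVX depends on the CPU and on how numpy/BLAS were built:

```python
# Same dot product two ways: an interpreter-bound scalar loop vs
# numpy's vectorized kernel, which can use AVX on supporting CPUs.
import time
import numpy as np

a = np.random.rand(2_000_000)
b = np.random.rand(2_000_000)

t0 = time.perf_counter()
slow = sum(x * y for x, y in zip(a, b))   # scalar Python loop
t1 = time.perf_counter()
fast = float(np.dot(a, b))                # vectorized (SIMD-capable)
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.2f}s  numpy: {t2 - t1:.4f}s")
```

Compilers do the same thing to hot loops in C/C++/Fortran code, which is why simulation, encoding, and deep-learning workloads are the big AVX consumers.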
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
7,640
2,057
113
I have added the AVX base, max, and all-core frequencies to the table for the entries I could find. Thanks for giving me something else to learn about and dig into. Now I am going to read up on what types of real-world tasks/applications I should expect AVX to be used in.
Please share that too :) I'm curious if it's possible to put together a useful list.