4th Gen Intel Xeon Scalable Sapphire Rapids Leaps Forward


MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
I haven't been as active on the forums lately, but this was excellent. You've done by far the best analysis.
 
  • Like
Reactions: AdrianBc

ano

Well-Known Member
Nov 7, 2022
649
269
63
Good thing I didn't just get several racks of Gen10 Plus with 6354s I ordered years ago.... ;)

Wonder when Gen11 delivery is; we haven't received any Genoa orders yet either, but delivery of 3rd gen Intel on Supermicro was at least 2021, and on HPE mid/late 2022.
 

Stephan

Well-Known Member
Apr 21, 2017
923
700
93
Germany
I hope the invisible hand of Mr Market punishes Intel for this SKU madness. Worked for VMware.

Meanwhile their accelerators are beaten in performance/watt or performance/cost by competitors. You pay $17,000 for the highest-tier CPU, and then some for an accelerator license, only to find out that a $10, 1-watt Google Coral chip still beats your CPU at MobileNet in your Frigate installation.

In 5 years we will see an 8-core RISC-V machine out of China, resembling Skylake performance, with ECC, PCIe 5.0, onboard 25/100 Gbps, IPMI, and coreboot.
 

AdrianBc

New Member
Mar 29, 2021
28
16
3
Great job, Patrick!

But honestly, this SKU variety is crazy. As I don't have a PhD in Intel tiering, I find it kind of overwhelming...

The problem with the SKU variety is not that they exist.

The problem is that they make it very hard to predict the performance of the actual SKU that you might be able to buy.

Everybody publishes benchmarks for the top Intel SKUs, which might match or even exceed the performance of the previous generation of AMD CPUs.

However, any individual or small business will be able to buy only SKUs that differ greatly from those benchmarked, being crippled in so many different ways (lower base frequencies, disabled accelerators, lower memory frequencies, and so on) that it becomes impossible to estimate the performance of the desired Intel SKU for the buyer's application and compare it with an AMD SKU of similar price to decide which is worth buying.
 
  • Like
Reactions: Styp and ano
May 20, 2020
40
26
18
The problem with the SKU variety is not that they exist.

The problem is that they make it very hard to predict the performance of the actual SKU that you might be able to buy.

Everybody publishes benchmarks for the top Intel SKUs, which might match or even exceed the performance of the previous generation of AMD CPUs.

However, any individual or small business will be able to buy only SKUs that differ greatly from those benchmarked, being crippled in so many different ways (lower base frequencies, disabled accelerators, lower memory frequencies, and so on) that it becomes impossible to estimate the performance of the desired Intel SKU for the buyer's application and compare it with an AMD SKU of similar price to decide which is worth buying.
Agree 100% with this. I have no idea how representative our small manufacturing company is, but for licensing reasons we're likely to go 1 socket, 16 or 24 cores next gen for a general-purpose VM server. That's the head-to-head I'd like to see across the last 2 generations for both AMD and Intel.
 
  • Like
Reactions: AdrianBc

i386

Well-Known Member
Mar 18, 2016
4,241
1,546
113
34
Germany
This is the official SKU list we have from Intel, with a total of 52 different SKUs.
I tried to look for specific workloads (workstation at home, file/media server for home, (file|application|virtualization) server for work) and I find it hard/impossible to choose a "correct" CPU without the feeling of missing something ._.
In a world of per-core licensed software
Alternatively, Intel is working with a few partners to implement a metering adoption model in which On Demand features can be turned on and off when needed and payment is based on usage versus a one-time licensing.
Are Oracle and VMware partners too? This could create new opportunities to make more money.
On Demand
I'm wondering how the licensing is implemented/applied and if it could be reverse engineered (for research and homelabs ;))
 

NablaSquaredG

Layer 1 Magician
Aug 17, 2020
1,338
811
113
I'm wondering how the licensing is implemented/applied and if it could be reverse engineered (for research and homelabs ;))
Without having looked into what Intel has done (e.g. the Linux drivers)...

If I were to implement such a feature, I would implement it in some kind of extra core that is there anyway (like the AMD PSP or Intel ME) and base it on public-key infrastructure.
You buy the feature from Intel and get a certificate tied to the CPU serial number and signed by Intel (e.g. RSA-4096 or Ed25519); the certificate is sent to the extra core, which verifies it and then enables the feature.
Impossible to crack, or rather so expensive that it would make more sense to just buy the features... Except if, you know, they backdoor themselves with the usual "oh there is a magic test BIOS that unlocks it" or whatever...
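
A minimal sketch of that idea (purely hypothetical; this is not Intel's actual On Demand implementation, and the key type, message format, and feature name are assumptions): the vendor signs a small blob binding a CPU serial number to a feature, and the management core verifies it against a baked-in vendor public key before enabling anything.

```python
# Hypothetical sketch only: signature-gated feature enablement,
# not Intel's real On Demand / SDSi flow.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Vendor" side (licensing backend): sign serial + feature id.
vendor_key = Ed25519PrivateKey.generate()    # stand-in for the vendor's signing key
vendor_pub = vendor_key.public_key()         # would be baked into the on-die core

def issue_license(cpu_serial: str, feature: str) -> bytes:
    return vendor_key.sign(f"{cpu_serial}:{feature}".encode())

# "Management core" side: verify before enabling the feature.
def enable_feature(cpu_serial: str, feature: str, license_blob: bytes) -> bool:
    try:
        vendor_pub.verify(license_blob, f"{cpu_serial}:{feature}".encode())
        return True                          # signature matches this CPU + feature
    except InvalidSignature:
        return False                         # wrong CPU, wrong feature, or forged blob

blob = issue_license("CPU-1234-ABCD", "QAT")
print(enable_feature("CPU-1234-ABCD", "QAT", blob))   # True
print(enable_feature("CPU-9999-ZZZZ", "QAT", blob))   # False: not transferable
```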
 

Stephan

Well-Known Member
Apr 21, 2017
923
700
93
Germany
Hash with SHA-384, ECDSA signature using the P-384 curve. Verified from within microcode using on-chip function blocks, I presume, to reduce ucode size. Not that I care. I'll buy such chips only after years, when all the serious bugs in the CPU and boards are ironed out. For 400 bucks from eBay, dumped from hyperscalers. Which is probably what they paid in the first place in 2023. If I needed to train an AI model or something I'd buy a few Nvidia cards.
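
The same sketch with the primitives mentioned here (SHA-384 plus ECDSA over P-384); again purely an illustration of the verify step, not the real microcode flow, and the message content is an assumption.

```python
# Illustration only: ECDSA P-384 verification with SHA-384 hashing.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

vendor_key = ec.generate_private_key(ec.SECP384R1())   # stand-in vendor signing key
vendor_pub = vendor_key.public_key()

message = b"CPU-1234-ABCD:DSA"                          # serial + feature, as before
signature = vendor_key.sign(message, ec.ECDSA(hashes.SHA384()))

try:
    vendor_pub.verify(signature, message, ec.ECDSA(hashes.SHA384()))
    print("feature unlocked")
except InvalidSignature:
    print("rejected")
```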
 

Bert

Well-Known Member
Mar 31, 2018
840
392
63
45
Honestly, it is scary to see Intel going in the direction of what they could do better as opposed to what customers need. Working around software licenses has been a long game. IIRC, the story goes:
License per machine ==> introduce multi-socket machines by making motherboards bigger
License per socket ==> introduce high core counts per socket by making sockets bigger
License per core ==> offload work from general-purpose cores to custom chips.

The last one is also driven by the needs of hyperscalers. With every bit of efficiency mattering, hyperscalers have been offloading work from the CPU to ASICs, custom chips, etc.

What I don't understand is why they have to be packaged together. Is latency really critical here?
 

BoredSysadmin

Not affiliated with Maxell
Mar 2, 2019
1,053
437
83
The problem with the SKU variety is not that they exist.

The problem is that they make it very hard to predict the performance of the actual SKU that you might be able to buy.

Everybody publishes benchmarks for the top Intel SKUs, which might match or even exceed the performance of the previous generation of AMD CPUs.

However, any individual or small business will be able to buy only SKUs that differ greatly from those benchmarked, being crippled in so many different ways (lower base frequencies, disabled accelerators, lower memory frequencies, and so on) that it becomes impossible to estimate the performance of the desired Intel SKU for the buyer's application and compare it with an AMD SKU of similar price to decide which is worth buying.
Predicting CPU/GPU performance based on SKU/specs has been an exercise in futility for a long while. Passmark isn't an ideal yardstick, but it is a yardstick that provides at least some idea of SKU performance.
 

Edu

Member
Aug 8, 2017
55
8
8
33
Great article.
Quick question: in the nginx CDN test on page 11, is the QAT accelerator being used on the Xeon 8490H?
 

unwind-protect

Active Member
Mar 7, 2016
415
156
43
Boston
Does anybody know of any published benchmarks that compare the machine learning performance of Sapphire Rapids versus GPUs? In, say, Torch?

I have no idea how a single SR core would compare to a GPU. I would like to change that.
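
Not a published benchmark, but here is a minimal sketch of the kind of Torch micro-benchmark one could run to get a rough single-core-vs-GPU number; the matrix size, dtypes, and iteration count are arbitrary assumptions, and a bare matmul is not a full training workload.

```python
# Rough matmul throughput comparison: one CPU thread vs. a GPU (if present).
import time
import torch

def bench_matmul(device, dtype, n=4096, iters=10):
    a = torch.randn(n, n, device=device, dtype=dtype)
    b = torch.randn(n, n, device=device, dtype=dtype)
    torch.matmul(a, b)                      # warm-up
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    dt = time.perf_counter() - t0
    return 2 * n**3 * iters / dt / 1e12     # TFLOP/s

torch.set_num_threads(1)                    # approximate "a single SR core"
print("CPU fp32:", bench_matmul("cpu", torch.float32), "TFLOP/s")
print("CPU bf16:", bench_matmul("cpu", torch.bfloat16), "TFLOP/s")  # AMX path needs a recent PyTorch/oneDNN
if torch.cuda.is_available():
    print("GPU fp16:", bench_matmul("cuda", torch.float16), "TFLOP/s")
```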
 

Stephan

Well-Known Member
Apr 21, 2017
923
700
93
Germany
@unwind-protect Short answer: roughly 10-30 times better performance/cost for GPUs vs. Intel CPUs, depending on the cleverness of the implementer and the sales price, of course. But that's the ballpark.

Read: Intel's Xeon Cascade Lake vs. NVIDIA Turing: An Analysis in AI (entire article, too)

Also an overview: Hardware for Deep Learning. Part 1: Introduction

And here is the cost/performance for a ton of GPUs: The Best GPUs for Deep Learning in 2023 — An In-depth Analysis

Author sure likes the A6000 Ada. 10k per card.