It's theoretically impossible if you dig further in. Of the 6 games I tested, none has a "data sharing" bound of more than 2%, on both my Xeon CLX-SP and my friend's 10900K; once a difference of several nanoseconds passes through a GPU-bound pipeline and shows up as FPS, the result is well within the margin of error. Simply check the non-K SKUs: they usually have a lower ring frequency (i.e. more ring latency), yet their stock gaming results match those of the respective K variants. I trust that Ian's findings are true to the current state of things.
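To put rough numbers on that claim, here is a minimal back-of-envelope sketch. All values are assumptions for illustration, not measurements: a 2% data-sharing bound (the worst case I saw), and a pessimistic +20% core-to-core latency increase (roughly +10 ns on a ~50 ns baseline).

```python
# All numbers below are illustrative assumptions, not measured data.
SHARING_BOUND = 0.02      # at most 2% of frame time limited by cross-core data sharing
LATENCY_INCREASE = 0.20   # pessimistic guess: +10 ns on a ~50 ns core-to-core latency

# Worst case, only the sharing-bound fraction of the frame slows down
# proportionally to the latency increase.
fps_impact = SHARING_BOUND * LATENCY_INCREASE
print(f"worst-case FPS impact: {fps_impact:.2%}")  # 0.40%
```

A 0.4% worst-case delta is smaller than typical run-to-run variance in game benchmarks, which is why it disappears into the margin of error.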
However, there could be other issues with the release ROMs and the scheduler that are causing performance hits.
I would give Intel a few weeks to sort things out before calling it DOA. That said... I am going to go tune my 5950X now.
Even if those few nanoseconds did matter, a lot of other things still seem off here.
1. Although the scheduler was enlarged, instruction latencies were not reported to have increased when Sunny Cove first launched, and they may even have decreased in later revisions of this arch such as Cypress Cove.
2. Testing a 14-core ES ICX, I saw that it not only has 2 more memory channels than CLX but also roughly 10 ns lower latency, and people have tested the mobile version before with similar results. Yet the desktop part shows the polar opposite? The same goes for the cache.
3. Also, and most importantly, the reported power is at least a third higher than what someone in our ICQ group measured. Here's a picture of an 11700K at 5.2GHz running Prime95 at 278W. If you roughly scale that to 4.0GHz, it comes to no more than 125W, the same as a 3700X running Cinebench R15 (SSE-only) at the same 4.0GHz.
And my 5800X draws about the same power running CB R15 SSE as their chips do in AVX2, while a 9900K sits at about half of their 10700K figure running the same Cinebench R15 as my 5800X? Both at stock.
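The 4.0GHz estimate above can be sketched with the usual dynamic-power scaling rule, P ∝ f·V². The 278W / 5.2GHz figure is from the screenshot; the Vcore values are my own guesses (assumptions) for a typical 14nm part at those clocks, not reported numbers.

```python
# Dynamic power scales roughly as P ~ f * V^2 (leakage ignored for simplicity).
P_52 = 278.0             # W at 5.2 GHz, from the Prime95 screenshot
F_52, F_40 = 5.2, 4.0    # GHz
V_52, V_40 = 1.40, 1.05  # assumed Vcore at each frequency (guesses)

P_40 = P_52 * (F_40 / F_52) * (V_40 / V_52) ** 2
print(f"estimated power at 4.0 GHz: {P_40:.0f} W")  # ~120 W, under 125 W
```

With plausible voltages the estimate lands around 120W, consistent with the "no more than 125W" figure, and well below what the review reported.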
If no paid, biased review were going on, I would guess that some component's performance or frequency was affected by test-version firmware, or possibly missing microcode or ME updates, causing for example bugged memory bandwidth or a low ring frequency. But that's unlikely: I've never seen it before, even on CRBs, and this is Anand and AMD. The famous Stilt quit that forum two years ago precisely because of the toxicity of the AMD fans there, and since then I've lost trust in them. They also always bash common industry practices, like advertising "max IPC uplift over selected apps" here, but never when they review AMD products.
It's just not uncommon nowadays for people to brainlessly boast about AMD and talk down Intel, making things up to back it up (for example, compare Ryzen 3000 and the i9 9000 series' power consumption multiplier-to-multiplier and you'll see something people don't tell you). It even extends to personal harassment: hired supporters have targeted even the oldest players, like der8auer, and known-good sites, like TechPowerUp, whenever they express a second thought.
These are dark times for rationality, when people don't even dare to speak. Try to dig deeper into reality by testing with advanced tools and more realistic workloads, and always profile a benchmark's workload before using it. Over the past year I bought or borrowed a lot of Xeon and EPYC2 parts and talked with several industrial designers and DC planners to try to get the workloads right, ending up with a fairly large list of test results; they explained a lot about why those CPUs are designed the way they are, and what the professionals choose and why. I shared all this with a friend a while ago, but I'm still struggling over whether to write a public review. Either way, we must remain ourselves and pursue the truth.
Sorry for talking about "politics" so much, but something has to be said.
Attachments: two screenshots (586.8 KB and 436.3 KB)