AMD making a fool of Threadripper customers - AGAIN?


111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
If I may interject here, I think some ppl may be overlooking something:
You don't necessarily have to pay people these days to spout lies and half-truths on the internet. You don't even have to pay them to shill for a specific company. So I would be rather reluctant to point fingers at fanbois shouting "paid shill". The vast majority of them do it without getting paid, or being explicitly told to do so.
My thoughts were the same, until I got some critical proof.
 

Patriot

Moderator
Apr 18, 2011
1,450
789
113
My thoughts were the same, until I got some critical proof.
You do not have critical proof; you have circumstantial evidence at best. And your first dozen comments came off as super spam-heavy... and nearly got you booted. You seem to get caught up in fanboi wars, and complaining about censorship on a Chinese website is... amusing, to say the least.
 
  • Like
Reactions: RageBone and edge

msg7086

Active Member
May 2, 2017
423
148
43
36
AVX2 in this case.
OK. From the information you provided, I think in this case it has something to do with thermal and turbo throttling. The AVX2-based binary could have performed better if the AVX2 code were not being dragged down in CPU frequency by the AVX512 code.

Under the same thermal budget, AVX2 will be slower than AVX512, but not by a huge amount: AVX512 runs roughly twice as fast per CPU clock, but it downclocks. In your first benchmark, the AVX2 code has to run at the same reduced clock as the AVX512 core code, and that penalty is huge. BTW, I would be interested to know how well it performs on a Ryzen, since at least Zen3 should have full AVX2 speed and doesn't downclock the way Intel does.
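To put rough numbers on that (a sketch with purely illustrative clock figures, not measurements):

    # Illustrative only: assume ~2x per-clock throughput for AVX512 vs AVX2,
    # and hypothetical sustained clocks with/without the AVX512 frequency penalty.
    clk_normal = 3.3          # GHz, hypothetical clock with no AVX512 license active
    clk_avx512 = 2.6          # GHz, hypothetical clock under heavy AVX512 load

    avx512            = 2.0 * clk_avx512   # ~5.2 "units": fast despite the downclock
    avx2_full_clock   = 1.0 * clk_normal   # ~3.3 "units": roughly 1.6x behind, not huge
    avx2_dragged_down = 1.0 * clk_avx512   # ~2.6 "units": the big penalty in a mixed binary
    print(avx512, avx2_full_clock, avx2_dragged_down)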

Now, back to the original debate. I claimed that the core code is in assembly, and thus comparing the speed of x265 is not valid proof of innocence on Intel's side. That still holds given that the worst case is a 5% difference. While it's beyond measurement error, it's still not valid proof. If Intel had actually compiled crappy code for the C part, you would still see less than a 10% difference. (It might actually be less, since Ryzen runs AVX2 code at a higher clock than Intel.)

To demonstrate, you should disable all ASM optimization, i.e. --no-asm, and then measure the numbers on both platforms.
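Something along these lines would do it (a rough sketch; the input clip is a placeholder, and it assumes the stock x265 CLI, where --no-asm disables all the hand-written assembly so only the compiler-generated code gets measured):

    import subprocess, time

    SOURCE = "input_1080p.y4m"   # placeholder clip; any Y4M source of a few hundred frames

    def run_x265(extra_args):
        """Time one encode and return elapsed seconds."""
        cmd = ["x265", "--input", SOURCE, "--preset", "medium",
               "--output", "/dev/null"] + extra_args
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        return time.perf_counter() - start

    t_asm   = run_x265([])            # default: hand-written SIMD kernels enabled
    t_noasm = run_x265(["--no-asm"])  # only the compiled C/C++ code path

    print(f"asm enabled : {t_asm:.1f}s")
    print(f"asm disabled: {t_noasm:.1f}s ({t_noasm / t_asm:.2f}x slower)")

Run that on both the Intel and the AMD box and compare the no-asm numbers; that is the part a compiler could actually influence.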

Also, I just want to be clear: my intention was never to claim that the Intel compiler is guilty here. It used to be, as there was a legal battle and Intel was forced to stop the "cripple AMD" behavior; they presumably fixed the issue after settling the case. My point has always been that proofs can still be biased by people's knowledge and opinions. You and I are not experts in CPU design, and I would say even the LTT or GN teams are not experts in those areas; maybe Lisa Su qualifies. If numbers look good to you, you might just take them as "proof" without knowing why the numbers look the way they do. That's why I would put a question mark on those "proofs", since "proofs" may not be true proofs.
 
  • Like
Reactions: RageBone

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
Yeah, why don't people just buy their favored product? Selecting a product based on your own preferences is all well and reasonable, until you start to notice why people prefer one thing over another, and ask how they make that decision.

People usually can't test everything themselves, so their decisions are usually based on others' tests, reviews, datasheets, or simply others' opinions. Then, when someone starts pouring large amounts of fake data and unreasonably biased opinions into the public media, and starts attacking anyone who disagrees, it becomes a problem.

If three people start to convey the same idea, the listener may begin to change his mind, even though the idea they are conveying may be completely false and ridiculous. Such is human nature.
I really disagree with this.

It is more “Is the person who paid for the platform happy with what they got?”

If they are unhappy after the fact, then it’s time to consider buying a competing product the next time. It’s that simple tbh. The buyer’s money speaks, not shouting on Internet forums.

There have been many instances where I wanted this feature or that support, and I had to take/use what was available to me.
 
  • Like
Reactions: RageBone

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
OK. From the information you provided, I think in this case it has something to do with thermal and turbo throttling. The AVX2-based binary could have performed better if the AVX2 code were not being dragged down in CPU frequency by the AVX512 code.

Under the same thermal budget, AVX2 will be slower than AVX512, but not by a huge amount: AVX512 runs roughly twice as fast per CPU clock, but it downclocks. In your first benchmark, the AVX2 code has to run at the same reduced clock as the AVX512 core code, and that penalty is huge. BTW, I would be interested to know how well it performs on a Ryzen, since at least Zen3 should have full AVX2 speed and doesn't downclock the way Intel does.

Now, back to the original debate. I claimed that the core code is in assembly, and thus comparing the speed of x265 is not valid proof of innocence on Intel's side. That still holds given that the worst case is a 5% difference. While it's beyond measurement error, it's still not valid proof. If Intel had actually compiled crappy code for the C part, you would still see less than a 10% difference. (It might actually be less, since Ryzen runs AVX2 code at a higher clock than Intel.)

To demonstrate, you should disable all ASM optimization, i.e. --no-asm, and then measure the numbers on both platforms.

Also, I just want to be clear: my intention was never to claim that the Intel compiler is guilty here. It used to be, as there was a legal battle and Intel was forced to stop the "cripple AMD" behavior; they presumably fixed the issue after settling the case. My point has always been that proofs can still be biased by people's knowledge and opinions. You and I are not experts in CPU design, and I would say even the LTT or GN teams are not experts in those areas; maybe Lisa Su qualifies. If numbers look good to you, you might just take them as "proof" without knowing why the numbers look the way they do. That's why I would put a question mark on those "proofs", since "proofs" may not be true proofs.
That 5% is just from switching from AVX512 to AVX2 at compile time, running on the same Intel platform. I know the core code is not affected, but the difference from the other parts does, I think, show something.

AVX512 on Ice Lake behaves differently than on Skylake and Cascade Lake: Ice Lake does not decrease the frequency under light AVX512 usage, but under heavy usage the frequency will still drop. So the "pure" AVX512 test runs at a lower frequency than the no-AVX512-asm one.
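A quick way to watch this on Linux while an encode is running (a sketch; it assumes the cpufreq sysfs interface is present, and scaling_cur_freq is only the kernel's view of the clock, so something like turbostat would be more precise):

    import glob, time

    # Sample the reported clock of every core once per second. Run the AVX512 or
    # AVX2 encode in another terminal and watch whether the clocks drop once the
    # heavy AVX512 sections kick in.
    paths = sorted(glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"))

    for _ in range(30):                    # ~30 seconds of samples
        mhz = []
        for p in paths:
            with open(p) as f:
                mhz.append(int(f.read()) / 1000)     # scaling_cur_freq reports kHz
        print(f"min {min(mhz):.0f} MHz  max {max(mhz):.0f} MHz  avg {sum(mhz) / len(mhz):.0f} MHz")
        time.sleep(1)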

I agree that this alone isn't definitive proof, and I will try the no-asm method when I get another AMD platform. But it at least shows part of the truth, and I've also tested other things built by Intel without ever seeing any intentional limitation against other vendors. I think people should be able to communicate their ideas and briefly show why they think so; that's what forums are for, right?

BTW, I've never seen anyone labeled an AMD fan just because they chose an AMD product. From what I see, the people called "Intel fans" are actually people who are fed up with those deceptions and online harassment.
 

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
You do not have critical proof; you have circumstantial evidence at best. And your first dozen comments came off as super spam-heavy... and nearly got you booted. You seem to get caught up in fanboi wars, and complaining about censorship on a Chinese website is... amusing, to say the least.
You have a point. Censorship is not really limited to politics these days. Perhaps I should never have trusted any of them from the beginning, given the signs.

I did not want to spam; there was just too much to show.
 

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
I really disagree with this.

It is more “Is the person who paid for the platform happy with what they got?”

If they are unhappy after the fact, then it’s time to consider buying a competing product the next time. It’s that simple tbh. The buyer’s money speaks, not shouting on Internet forums.

There have been many instances where I wanted this feature or that support, and I had to take/use what was available to me.
Perhaps you overestimated people's rationality and underestimated the autonarcosis effect.

For example, there is little to no difference between a normal SSD and a high-end desktop SSD in everyday use. Of course, high-end desktop SSDs have higher sequential speeds, but it's often not the professionals who need that characteristic who buy them. Most people buy them because they think they're good, because they saw the benchmarks, not because they're actually good for them.

Now comes the bad part. Vendors now use tricks like large write-back caches and RAM caching to inflate those benchmarks and make the product look high-end. The only thing this does is make benchmarks and copy speeds look better, while messing up the GC algorithm and, in some situations, endangering data integrity. There is no real benefit to the user.
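A crude way to see through that kind of caching trick (a sketch; the target path and sizes are placeholders, and the total should be far larger than any plausible cache on the drive): keep writing incompressible data and watch the per-chunk throughput collapse once the cache is exhausted.

    import os, time

    TARGET = "/mnt/testssd/bigfile.bin"    # placeholder path on the drive under test
    CHUNK  = 256 * 1024 * 1024             # 256 MiB per chunk
    TOTAL  = 64 * 1024 * 1024 * 1024       # 64 GiB, hopefully well past any SLC cache
    buf = os.urandom(CHUNK)                # incompressible data defeats compression tricks

    with open(TARGET, "wb") as f:
        written = 0
        while written < TOTAL:
            start = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())           # push it to the device, not just the page cache
            elapsed = time.perf_counter() - start
            written += CHUNK
            print(f"{written / 2**30:5.1f} GiB written, {CHUNK / 2**20 / elapsed:7.1f} MiB/s")

    os.remove(TARGET)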

Now back on topic, to CPUs. One of my friends uses an EPYC only to play games. When I asked him why, he simply said that it has a very high Cinebench score, so it's definitely better than anything else. Only when I showed him an apples-to-apples comparison did he realize that his Minecraft was losing a lot of frames to EPYC's interconnect latency. There are a lot of examples like this, where people have no real concept of performance, or they just choose to fool themselves: come on, everyone seems to say it's good, why should I think otherwise?

I know, unboxing and benchmarking do have some value. But nowadays a lot of products are designed to run a certain set of benchmarks well, sometimes at the expense of real-world performance. And there will always be plenty of people who boast about those numbers and ignore everything else. They teach actual users to look only at certain benchmarks and to ignore what they actually get.

Then those products sell and others don't, and soon every vendor starts doing the same thing. Like Intel, whose desktop parts have a larger, faster L3 cache and "benchmark cores" to gain an advantage in benchmarks like CPU-Z and Cinebench, while their server lineup has more L2 cache and a fully interconnected mesh, which, from my testing, actually increases IPC in most real scenarios.

All I'm saying is that not everyone is rational, the feeling of "being happy" can be manipulated, and unfortunately the market is shaped by everyone, including the manipulated ones. People can become irrational and fool themselves once they start building a religion around a brand. You may not be that kind of person, but I think we can all agree that we do not want a market driven by cognitive biases.
 

RageBone

Active Member
Jul 11, 2017
617
159
43
[...] his Minecraft was losing a lot of frames to EPYC's interconnect latency.
That is a very weird and specific thing to pick as the example you show.
And I find your conclusion that it must be the interconnect latency very, very weak.
I find it far more likely that the much lower base and boost clocks are to blame.
Or, of course, even more so if it is a Naples.

Were high CB points really his only reason? I find that unlikely.
 
Last edited:
  • Like
Reactions: ReturnedSword

ReturnedSword

Active Member
Jun 15, 2018
526
235
43
Santa Monica, CA
@111alan Rationality and "auto-narcosis" have nothing to do with anything for a serious user and serious buyer. Serious people don't have the time to worry about these things. They also wouldn't get tricked that easily, because, as you may infer from my polite replies thus far, I am implying that the people who get upset are those who expected magic and specialness and then realized they got a normal product, with a set of limitations like any other product. Yes, even products from Intel have limitations.

The SSD example is a bit irrelevant. It's well known that SSD firmware may be tuned so that the SLC cache presents the best "face" of the performance. Many English-language reviews point this out specifically. I have not followed Chinese-language reviews for a while, but Chinese or not, if someone is gullible and naive enough to believe stated performance from the seller's or manufacturer's PR marketing, then tbh they deserve to have their money taken from them.

Still, as I said before, if the user is happy, then that’s enough. If they’re not happy they are welcome to go buy the other product from another company. It’s that simple.

Why do people get unhappy with an expensive product they bought? I think it's quite clear. They bought into hype and bought products that are too expensive for their budget, that they don't understand, and whose capabilities they possibly don't even use 50% of, because it's "the best." Then some time later they find out that, by god, there's some other product that is actually "better," or their product is "missing" some feature. They are mad because their product no longer feels special and the best.

What serious person has time for that? Do the best research a person can do, buy within the budget, and call it a day.
 
  • Like
Reactions: RageBone

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
That is a very weird and specific thing to pick as the example you show.
And I find your conclusion that it must be the interconnect latency very, very weak.
I find it far more likely that the much lower base and boost clocks are to blame.
Or, of course, even more so if it is a Naples.

Were high CB points really his only reason? I find that unlikely.
The interconnect issue I mentioned mainly manifests as memory latency, because Minecraft is a single/dual-core game.

Look at what was captured with hardware event-based sampling: we got a very high memory-latency bound, with 18.7% of clock ticks affected by memory latency, even on the Intel platform. Imagine what it does to performance when we roughly double that latency on EPYC. It's not about L3, though; the 8259's 33MB L3 is more than enough for a single/dual-core game, with a hit rate of about 83%/70% (see the update below), still very good.
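For anyone who wants to sanity-check that kind of number without VTune, a rough L3 hit rate can be pulled from the generic perf cache events (a sketch; it assumes Linux perf with sufficient privileges, the PID is a placeholder, and the generic LLC events map to different raw counters on each vendor, so treat the result as an approximation):

    import subprocess

    PID = "12345"   # placeholder: PID of the game process to sample

    # Count last-level-cache loads and misses on the target process for 30 seconds.
    cmd = ["perf", "stat", "-e", "LLC-loads,LLC-load-misses",
           "-p", PID, "--", "sleep", "30"]
    out = subprocess.run(cmd, capture_output=True, text=True).stderr

    counts = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] in ("LLC-loads", "LLC-load-misses"):
            counts[parts[1]] = int(parts[0].replace(",", ""))

    if counts.get("LLC-loads") and "LLC-load-misses" in counts:
        hit_rate = 1 - counts["LLC-load-misses"] / counts["LLC-loads"]
        print(f"approx. LLC (L3) hit rate: {hit_rate:.1%}")
    else:
        print("LLC events not available or not counted on this machine")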

And I forgot to mention the front end. It's also a factor: AMD's current architectures have only 4 decoders, while Intel has 5 (or 6 on Golden Cove).

It's not about frequency either: the EPYC2 7R32 runs at 3.1-3.2GHz when fully loaded, a little higher than the 8259CL, yet it's much weaker in gaming than the latter.

[Attachment: 20.JPG (hardware event-based sampling screenshot)]

Just do some informal surveying: most people judge CPU performance and so-called IPC basically only by Cinebench and CPU-Z benchmark scores. Not many people have any concept of "use cases."

Update: the 83% L3 hit rate is for Ice Lake, 70% for Cascade Lake. That architectural update wiped out half of the memory bound and 4% of the DRAM latency bound here, lol.

I did not test AMD because their uProf does not support L3 hit-rate measurement. But in the four cases I tested, both their L1 and L2 hit rates were lower than Skylake-client.
 
Last edited:

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
@111alan Rationality and "auto-narcosis" have nothing to do with anything for a serious user and serious buyer. Serious people don't have the time to worry about these things. They also wouldn't get tricked that easily, because, as you may infer from my polite replies thus far, I am implying that the people who get upset are those who expected magic and specialness and then realized they got a normal product, with a set of limitations like any other product. Yes, even products from Intel have limitations.

The SSD example is a bit irrelevant. It's well known that SSD firmware may be tuned so that the SLC cache presents the best "face" of the performance. Many English-language reviews point this out specifically. I have not followed Chinese-language reviews for a while, but Chinese or not, if someone is gullible and naive enough to believe stated performance from the seller's or manufacturer's PR marketing, then tbh they deserve to have their money taken from them.

Still, as I said before, if the user is happy, then that’s enough. If they’re not happy they are welcome to go buy the other product from another company. It’s that simple.

Why do people get unhappy with an expensive product they bought? I think it's quite clear. They bought into hype and bought products that are too expensive for their budget, that they don't understand, and whose capabilities they possibly don't even use 50% of, because it's "the best." Then some time later they find out that, by god, there's some other product that is actually "better," or their product is "missing" some feature. They are mad because their product no longer feels special and the best.

What serious person has time for that? Do the best research a person can do, buy within the budget, and call it a day.
If only the crowd were as reasonable as you say, this conversation might never have happened in the first place.

If you mean these people's choices won't affect rational users, here are two things to note:

1. I already talked about this. Hardware companies will focus more on propaganda and on making products that are easier to brag about, instead of making genuinely good products. They may even raise prices to make people think their product is more "premium" than the competition. Look closely and you can see that vendors are already doing this.

In the end we may get products with little of the performance uplift we actually want, but with a huge price tag. It may not affect you now, but look at PC gaming: is there actually a game today that can't be played well on a 2018 Skylake 6-core? That part sold for about $400, with premium motherboards usually costing $200-300 back then.

2. The hardware community may shrink, and we may no longer be able to discuss technology and performance assessment peacefully. As I mentioned before, China has already fallen in this respect. Liars are everywhere; people fake everything for fanboys' favor and for sponsorships. You simply can't express any opinion about certain vendors without starting a war.

And don't be so sure this can't spread. Remember the Der8auer and Stilt examples from before? Even people as experienced as they are get their share of online harassment.

I've seen too many communities and too much rationality destroyed by lies and attacks. That's why I'm posting all of this.
 
Last edited:

RageBone

Active Member
Jul 11, 2017
617
159
43
I've seen too many communities and too much rationality destroyed by lies and attacks. That's why I'm posting all of this.
I find this weird; I think you have sufficiently described the issues on, for example, Chiphell.
You may have noticed that my questions at the beginning were about very specific issues I had with your claims.

That Chiphell is not a place for reasonable discussion I can believe, but I don't need proof because, frankly, I don't care.
In the end, Chiphell is not STH; don't carry those issues over here.

So I can concede that you have made your point about the issues on Chiphell that stymied you there, and I can empathize.
But please answer me: what does that have to do with us here right now?

If I were malicious, I would interpret your responses so far as:
You claiming that all we spout is propaganda nonsense we get paid by AMD to say.
Hence, you only need to say it's propaganda, and here is a chart of real-world "benchmarks" to disprove our nonsense.

If that really were how you see things here, then I think I can fully understand why you react this way.

I really hope it isn't; that would be difficult, to say the least.
 

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
If I were malicious, I would interpret your responses so far as:
You claiming that all we spout is propaganda nonsense we get paid by AMD to say.
Hence, you only need to say it's propaganda, and here is a chart of real-world "benchmarks" to disprove our nonsense.
Of course not. If someone is confirmed to be an AMD fan or to be paid to speak, I will not talk to him at all; it's just pointless. If an environment allows such a person to bamboozle others, I'll leave that environment. But I'm still here.

I brought up Chiphell just to say that AMD is at least very likely a customer of its paid speech-control "service".

And if a business finds a method effective, it will replicate it. Even if it's just fan activity, those fans will spread the disease everywhere.

So I talked about all of this because I think people need to know what's happening now and always keep their guard up, wherever they are. I've seen quite a few rational people fall for the continuous barrage of false propaganda and accusations; people can change, and given enough time, no one stays the same.
 
Last edited:
  • Like
Reactions: RageBone

msg7086

Active Member
May 2, 2017
423
148
43
36
I'm not even sure what you guys are talking about right now, AVX2, AVX512, or Chinese trolls, lol. As a bystander.
To sum it up, it all started with the strong statement: "But on the outside AMD fans are famously notorious, fabricating test data and bending truths with their explanations to suit their needs, while swarm-attacking everyone who has second thoughts. There is also a lot of evidence indicating AMD pays independent reviewers and commenters for its reputation campaign." 111alan has tried his best to tell us that he is right and people who don't agree with him are wrong. One of his proofs was to show that the Intel C Compiler didn't cripple AMD, where he measured the performance of x265, a tool mostly written in assembly, and claimed that Intel was innocent.

Hope this helps those who don't want to read pages.
 
Last edited:

tinfoil3d

QSFP28
May 11, 2020
873
400
63
Japan
@msg7086 Thank you, sir. I initially thought this was about AMD dropping compatibility for newer TR on older-generation platforms, and what we think of that. My view is "it's just business," like with any other company out there. Maybe Framework is different; time will tell. TR isn't even AMD's main business. Who cares about end users anyway? That's my point of view. I have no affiliation with Intel whatsoever, BTW! Just in case.
 
  • Like
Reactions: RageBone

111alan

Active Member
Mar 11, 2019
290
107
43
Haerbing Institution of Technology
To sum it up, it all started with the strong statement: "But on the outside AMD fans are famously notorious, fabricating test data and bending truths with their explanations to suit their needs, while swarm-attacking everyone who has second thoughts. There is also a lot of evidence indicating AMD pays independent reviewers and commenters for its reputation campaign." 111alan has tried his best to tell us that he is right and people who don't agree with him are wrong. One of his proofs was to show that the Intel C Compiler didn't cripple AMD, where he measured the performance of x265, a tool mostly written in assembly, and claimed that Intel was innocent.

Hope this helps those who don't want to read pages.
111alan has tried his best to tell us that he is right and people who don't agree with him are wrong.
Where does this come from?

It just feels like the thousand questions I answered were all in vain.

lol
I have no affiliation with Intel whatsoever, BTW! Just in case.
Wise, most wise. Just don't get involved in any political affairs.
 

msg7086

Active Member
May 2, 2017
423
148
43
36
I lost interest in the discussion when I saw your statement that your proof shows "a part of the truth." "Part of the truth," lol.

You have been pushing strong arguments about AMD and Intel and trying to convince us with a wall of "indirect proofs." You were desperately cherry-picking "proofs" biased toward Intel's side. Apart from picking x265, a tool that's mostly written in assembly, many of the other "proofs" are nonsense as well.

For example, take "Everyone use only Cinebench [...] not common benchmarks before": I randomly picked an LTT video from 2016 talking about Intel CPUs, and they used Cinebench R15 to measure how fast the Intel CPU was. Or "everyone stress only [...] CS:GO test results [...] to say AMD has better gaming performance": CS has always been a test where Intel performs way better than AMD.

Just from those two claims, we would conclude that (1) LTT has been paid by Intel since 2016 to use an uncommon benchmark to show how fast Intel CPUs are, and (2) Intel chose CS:GO to show how Intel outperforms AMD, and that is misleading.

If you want to cherry-pick proofs to support your statement, anyone can cherry-pick the same kind of proofs to prove the opposite. Those are typical fanboy behaviors. Of course, you said you are not one. Yeah, right, I can clearly see that.

This thread has been seriously derailed ever since the reply where you made your strong claims. I wish you would keep your indirect proofs at CHH and keep this place clean. If you don't like joining political affairs, don't start one.