AMD making a fool of Threadripper customers - AGAIN?

WANg

Well-Known Member
Jun 10, 2018
New York, NY
Haha, now that you mention it, I'd almost forgotten the Rambus days. My dad got a pair of Rambus sticks with his Pentium 4 after he upgraded from a Pentium 133. It was a terrible build at the time.
Oh yeah - I only had a single exposure to a Pentium 4 Northwood, and I put a stop to it after 5 weeks and traded it in for an Athlon XP.

Although if you go to VOGONS you'll find people hailing the Rambus P4 like it's the best thing since sliced bread. Tell them it has a GeForce FX and an Aureal Vortex in a beige case, and you'll hear them collectively swoon.
 

RageBone

Active Member
Jul 11, 2017
For the original enterprise SP3 socket, it theoretically supports all 3 generations of EPYC (but most vendors only let each board support two generations, due to BIOS flash capacity limits).
...
That is sadly true, but with one exception: I can now say that the ASRock Rack ROMED8-2T does work with all 3 generations.
Not on the same BIOS, of course, and it drops support for Naples in the newest one for Milan, but it does!

But whose fault is that? AMD's? Well, yes! The OEMs'? Oh, definitely yes! They want to sell new boards every generation.
And that is not just an issue with AMD hardware, but with Intel too, of course!

For the desktop/WS version, let's see: X399, TRX40 and WRX80 were all specifically made not to support the next batch of CPUs (Zen 1, Zen 2-X, Zen 2-WX).
I don't see how that statement is actually true,
because X399 had two generations of CPUs, the TR 1900 and TR 2900 series.
That is the usual two generations of support you see with any Intel platform.
Of course I would have liked to see more generations of support for it, but I also can't remember AMD promising anything in that regard.
Unlike with AM4, where they did promise, and then ****ed and bodged that up.

Additionally, TRX40 is technically not that different from X399; the only real difference I know of, besides PCIe Gen 4 and its requirements, is the chipset link width change from x4 to x8.
That inherently does not make the platform any less built to support the next generation of CPUs; to me, it actually suggests the opposite.
Actually, WRX80 and TRX40 do support the next generation; it is just that AMD decided to cancel one of those.
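The chipset-link change mentioned above is easy to put in numbers. Here is a rough sketch of per-direction PCIe link bandwidth (approximate: it only accounts for line-encoding overhead, and the function and figures are mine, not from the thread):

```python
# Rough per-direction PCIe link bandwidth in GB/s.
# Gen3 signals at 8 GT/s with 128b/130b encoding; Gen4 doubles the rate.
def pcie_gbytes_per_s(gen: int, lanes: int) -> float:
    gt_per_s = {3: 8.0, 4: 16.0}[gen]
    return gt_per_s * (128 / 130) * lanes / 8  # bits -> bytes

x399_link = pcie_gbytes_per_s(3, 4)   # X399 chipset link: Gen3 x4
trx40_link = pcie_gbytes_per_s(4, 8)  # TRX40 chipset link: Gen4 x8

print(f"X399  chipset link: {x399_link:.1f} GB/s")
print(f"TRX40 chipset link: {trx40_link:.1f} GB/s")
```

The encoding overhead cancels in the ratio, so the TRX40 link carries exactly four times the raw bandwidth of the X399 one.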

Considering that WRX80 is basically an SP3 server board with an added chipset, I don't see how it could be built to be obsolete and not support the next generation of CPUs.

And yes, I am sad that AMD canceled, what was its name? Chagall.
Would I have bought one? Likely not.
Maybe in two years' time, once they got affordable on the used market.

It's just that the customer decides what products get made. Large enterprises have their own complex and reliable test methodologies, so they know exactly what AMD's unbalanced designs are good and bad at, which makes them rational and hard to bamboozle.
Bamboozled, yes; marketing can be bamboozling.
It's good that companies care about their use case and look at what fits it best.

1. One of my long-time friends went to work on drivers for AMD platforms and ended up working under an "AMD temporary worker". He asked me to remove a post mentioning him in a topic considered "negative" toward AMD on Chiphell.
I don't really understand what you mean, the way you have written it.
But if a friend were to write something on a forum and mention me more or less directly as a source, I'd likely have an issue with that as well.
So all I'm saying is that there doesn't need to be a conspiracy by AMD to cause someone to do that.


Just go to Bilibili and open any video about CPUs posted before the release of Intel 12th gen. Just see the sheer amount of "AMD YES" in the barrages and comments, and how many people pit overclocked AMD CPUs against Intel at stock or even downclocked. Sometimes you will see people secretly down- or overclocking memory or bus frequency to massage the results.

3. Speaking of reviews, I'm not a believer in the north-bridge design when it comes to gaming, or in sayings like "Intel performance regresses over generations". So once, after I saw the reviews, I borrowed CPUs and motherboards to test it out myself.
No thanks, I will not go to Bilibili, and I don't care what someone said under some random video.
Personally, I have not heard about "Intel perf regressing" over generations, unless you are referring to the Spectre and Meltdown mitigations.


Then there comes the problem. My results met my expectations and were almost the same as TechPowerUp's (AMD Ryzen 7 5800X review). So why was there an attack on them that forced them to apologize and explain the situation in an AMD-favored way?

Here's one of my graphs, all CPUs locked at 4 GHz.
2.png
Those are some fine charts you made there.
As far as I understand them and the ones TechPowerUp made, I can't say they are almost the same.
I can't even really find a distant relationship between them, though that might just be me looking at this at 5 in the morning.
Let me just say that actual benchmarking is surprisingly hard.
And "why are the results the way they are?" is always the base question that should be asked and answered.
With that in mind, I noticed some questionable things in the TechPowerUp charts.
In regard to your charts, the only two questions I have are why you capped the clocks to 4 GHz, and why you chose those specific games, and to represent what?

But on the outside, AMD fans are famously notorious for fabricating test data and bending the truth to fit their needs, while swarm-attacking everyone who has second thoughts. There is also a lot of evidence indicating that AMD pays independent reviewers and commenters for its reputation campaign. If you've read "The Crowd", you should already know why they did all of this, and why some people just buy it.
Oh, **** fans of any kind.
Fandom for big tech companies is a frickin' bad idea.
But everything you said in that quote specifically, I think you could swap in Intel and especially Nvidia, and it would ring just as true.
Though you give me the impression of someone disinclined toward AMD, maybe to a degree I would expect from a fan of the other companies.

I just miss the time when flagship MSDT/HEDT CPUs only cost $400/$1000.
Well, I would agree, if I could actually miss those times.
Though I guess everything is better than what we have right now in the GPU market.
 

111alan

Active Member
Mar 11, 2019
Harbin Institute of Technology
Was that why Intel used MS Office for benchmarking?

The "AMD YES" comments appeared way before 12th gen was released; it probably started when first-gen Ryzen came out. Fun story: at that time I thought the product would be a huge success, so I convinced one of my co-workers to buy the stock, and he did. It was about $10 a share when he bought it.

I upgraded from an i7-4770 to a 3600. The old 4770 platform was problematic the whole time: it had 2 of its 4 memory slots dead. I got a new board shipped to me and it behaved the same, so it was either a bad CPU or a bad motherboard batch (ASRock, if you are interested). The 3600 platform is giving me the most value, and in a few years I'll upgrade it to a 5900. It's a shame the i7-4770 had no upgrade path at all.

Both companies have made crap products in the past. I picked the Athlon 64 when Intel made the hot-pot P4. I went with Intel's Core series when AMD gave me the low-efficiency Construction-core machines. Now I'm going back to Ryzen, because what has Intel been doing all this time?

It's interesting to see propaganda from BOTH sides, but at the end of the day the majority will buy whatever maximizes value.

Now back to the topic.

It's sad to see AMD not releasing new TR platforms, presumably because Intel stopped competing in this market as well. As people always say: thanks to AMD we got better Intel products, and thanks to Intel we got better AMD products. When competition stops, this is what we get.

Not just AMD but also Intel. Remember the days when an E3-1230 could be had for a reasonable price?
Every company has its propaganda system, and every chip company has had highs and lows. The problem is:
Intel did not pay people to force-feed the public fabricated data and info.
Intel did not use massive, organized internet violence to suppress freedom of speech.

Plus, they led almost all the price increases for CPUs and motherboards in recent years. If this continues, we may see desktop CPUs priced like luxury goods soon. The good thing is this probably will not happen. Competition is good only when they compete with legitimate methods; it's not good when people just get manipulated into different political camps, constantly framing each other and making things up. I appreciate that they didn't change sockets unnecessarily on the MSDT platform, though; I just hope Intel's LGA1700, as well as all other platforms in the future, lasts as long. I hate swapping boards and heatsinks all the time too.

AMD was certainly more competent and technologically superior back then, in the K7-K8 era. But here is the power consumption graph by multiplier, measured during render tests (Cinebench for ease of data collection; V-Ray and actual V-Ray-in-3ds-Max show similar results). Power usage is proportional to frequency times voltage squared. Intel chose to clock as high as they can at the expense of power at the high end in recent years because they think thermals are manageable for enthusiasts, not because they can't make high-efficiency, low-frequency products. Seems counterintuitive? Well, you can always test it yourself.
And Intel's performance at iso-frequency isn't worse; sometimes it's much better, as shown in the previous post and some other test data.
3.png
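The proportionality cited above (dynamic power scaling with frequency times voltage squared) can be sanity-checked with a toy model. The constant and operating points below are arbitrary, chosen only to show the scaling, not measured values:

```python
def dynamic_power(c_eff: float, f_ghz: float, v_core: float) -> float:
    """Classic CMOS switching-power model: P = C_eff * f * V^2."""
    return c_eff * f_ghz * v_core ** 2

base = dynamic_power(30.0, 4.0, 1.0)      # arbitrary reference point
faster = dynamic_power(30.0, 4.4, 1.0)    # +10% clock at the same voltage
hotter = dynamic_power(30.0, 4.4, 1.1)    # +10% clock needing +10% voltage

print(f"+10% f, same V: {faster / base:.2f}x power")  # linear in f
print(f"+10% f, +10% V: {hotter / base:.2f}x power")  # V term is squared
```

This is why chasing the last few hundred MHz is so expensive: the extra voltage the higher clock needs gets squared.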

For the Office tests: as a person who deals with complex Excel tables regularly, I can actually feel the difference; I just need a way to quantify it. And isn't Office important for a lot of people?

By the way, if you want something like the E3 and E3 v2 of the past, look at the LGA3647 platform.
 

111alan

Active Member
Mar 11, 2019
Harbin Institute of Technology
I don't see how that statement is actually true,
because X399 had two generations of CPUs, the TR 1900 and TR 2900 series.
That is the usual two generations of support you see with any Intel platform.
The TR 1900 and 2900 are both Zen 1; it's like the 7900X and 7980XE. Of course they should be compatible. AMD's approach was much more promising back then. I do not think they couldn't let Zen 2 run at x4 on X399, nor do I think they couldn't lock the PCIe lanes to 3.0 if the PCB isn't good enough to run 4.0, as Supermicro did on the H11SSL.

Intel does let one chipset support at least two generations, though.


As far as I understand them and the ones TechPowerUp made, I can't say they are almost the same.
Zen 2 only wins in in-cache benchmarks; Zen 3 wins at productivity against Skylake but still loses at gaming. Some test subjects are different, but you get the idea.

I don't really understand what you mean, the way you have written it.
But if a friend were to write something on a forum and mention me more or less directly as a source, I'd likely have an issue with that as well.
So all I'm saying is that there doesn't need to be a conspiracy by AMD to cause someone to do that.
It's a speech-controlling process I directly experienced. There is a lot more mentioned afterwards.

No thanks, I will not go to Bilibili, and I don't care what someone said under some random video.
Personally, I have not heard about "Intel perf regressing" over generations, unless you are referring to the Spectre and Meltdown mitigations.
No, you don't need to if you don't want to, but the Chinese don't have a choice; things like Google and YouTube are blocked.

A lot of people insist the 11th-gen Core is much worse than 10th gen, and there were a lot of tests back then that were clearly faked.

Fandom for big tech companies is a fricken bad idea.
I couldn't agree more. Back then I disliked Samsung SSDs because they paid everyone to promote burst-performance benchmarks like AS SSD above everything else while passing their drives off as enterprise-level. I posted some detailed tests to show people what was right and what was cause for concern. Now I have come to like Samsung, because they don't do this anymore; instead they make really competent drives.

I have always liked companies that promote innovation, rationality and actual real-world performance. I have always hated companies that promote symbolism, fake truths, spread corruption and incite fandom to suppress freedom of speech. I'll probably still hold this view in the future.
 

Wasmachineman_NL

Dell Precisions FTW!
Aug 7, 2019
A lot of people insist 11th gen core is much worse than 10th gen, and there are a lot of tests that's clearly fake back then.
Yeah, but Rocket Lake is a ****ing joke of a CPU; Buildzoid even goes as far as saying "just get a 10900K for gaming".
 

RolloZ170

Well-Known Member
Apr 24, 2016
Yes, the very first SP3 EPYC boards would have needed PCIe 4 (or PCIe 5 and DDR5) even though those didn't exist yet, in order to support at least 3 generations of processors.
 

WANg

Well-Known Member
Jun 10, 2018
New York, NY
Every company has its propaganda system, and every chip company has had highs and lows. The problem is:
Intel did not pay people to force-feed the public fabricated data and info.
Intel did not use massive, organized internet violence to suppress freedom of speech.

Plus, they led almost all the price increases for CPUs and motherboards in recent years. If this continues, we may see desktop CPUs priced like luxury goods soon. The good thing is this probably will not happen. Competition is good only when they compete with legitimate methods; it's not good when people just get manipulated into different political camps, constantly framing each other and making things up. I appreciate that they didn't change sockets unnecessarily on the MSDT platform, though; I just hope Intel's LGA1700, as well as all other platforms in the future, lasts as long. I hate swapping boards and heatsinks all the time too.
Eh, were you not aware of Intel's long and illustrious history of manipulating benchmarks (either by outright dominating makers of benchmark suites like BAPCo, or by making sure that the apps being used incorporate shared libraries specifically made to optimize code paths for Intel CPUs and slow down competitors), or of the fact that the US Federal Trade Commission sued Intel in 2009 for anticompetitive practices, like kickbacks to manufacturers so they would appear profitable... when they probably weren't? Crap like that is well known to us here in the West.

This kind of behavior still happens, as seen in that Threadripper example, and as @Patrick pointed out in the GROMACS example back in 2019 (granted, some of the test behavior was mentioned later in an article update... the fact that Intel pitted a 400 W TDP part only available from them against a 225 W mid-range EPYC seems fairly pointless to begin with). So no: when you have a marketing department the size of Intel's, you can pretend that dirty, underhanded tactics don't exist... or have your CEO tell people that benchmarks don't matter when the numbers make you look bad.
 

111alan

Active Member
Mar 11, 2019
Harbin Institute of Technology
Eh, were you not aware of Intel's long and illustrious history of manipulating benchmarks (either by outright dominating makers of benchmark suites like BAPCo, or by making sure that the apps being used incorporate shared libraries specifically made to optimize code paths for Intel CPUs and slow down competitors), or of the fact that the US Federal Trade Commission sued Intel in 2009 for anticompetitive practices, like kickbacks to manufacturers so they would appear profitable... when they probably weren't? Crap like that is well known to us here in the West.

This kind of behavior still happens, as seen in that Threadripper example, and as @Patrick pointed out in the GROMACS example back in 2019 (granted, some of the test behavior was mentioned later in an article update... the fact that Intel pitted a 400 W TDP part only available from them against a 225 W mid-range EPYC seems fairly pointless to begin with). So no: when you have a marketing department the size of Intel's, you can pretend that dirty, underhanded tactics don't exist... or have your CEO tell people that benchmarks don't matter when the numbers make you look bad.
I'm well aware that Intel did try to intentionally undermine the competition by manipulating software, going as far as reading the CPU's vendor string and choosing the code path based on whether it reads "GenuineIntel" or "AuthenticAMD". This happened mainly during the 90s. They also heavily promoted Super PI, a single-threaded FP-only benchmark that has little to do with real use cases. That's why I'm not a fan of them either. In fact, I'm not a fan of any vendor now; Nvidia and their CUDA do the same things too.
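The dispatch pattern described above (choosing a code path from the CPUID vendor string rather than from the feature flags the CPU actually reports) can be sketched like this; the function names and flag are hypothetical, purely to illustrate the difference:

```python
def pick_path_by_vendor(vendor: str, has_sse2: bool) -> str:
    # Vendor-string dispatch: only "GenuineIntel" ever gets the fast path,
    # even if another CPU reports the same instruction-set support.
    if vendor == "GenuineIntel" and has_sse2:
        return "sse2-optimized"
    return "generic"

def pick_path_by_feature(vendor: str, has_sse2: bool) -> str:
    # Feature-flag dispatch: any CPU advertising SSE2 gets the fast path.
    return "sse2-optimized" if has_sse2 else "generic"

print(pick_path_by_vendor("AuthenticAMD", True))   # falls back to generic
print(pick_path_by_feature("AuthenticAMD", True))  # gets the fast path
```

Modern dispatchers generally key on CPUID feature flags; the vendor-string variant is the behavior the post describes.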

BUT:
Did they launch massive waves of personal attacks against anyone with a different opinion? No. Did they frame others for things they hadn't done? No. Did they hire people for a reputation campaign, to make people believe some ridiculous viewpoints? No (though they're trying that now, as far as I know).

There needs to be a bottom line here. How can people be rational if talking about knowledge and truth can be severely punished?
"If sharp criticism disappears completely, mild criticism will become harsh. If mild criticism is not allowed, silence will be considered ill-intentioned. If silence is no longer allowed, not praising hard enough is a crime. If only one voice is allowed to exist, then the only voice that exists is a lie."
Speaking of ecosystems: when people have their own ecosystem, they will always try to capitalize on it; that's why they build the ecosystem in the first place. I'm not saying they can do just anything, but they have the right not to put maximum effort into optimizing for the opponent. Actually, I think Intel isn't as bad as people say in this area. While building my x265 benchmark, I tried compiling x265 with both Intel's ICC compiler and MSVC and ran both builds on a 3700X. There aren't any intentional slowdowns as far as I can see. The same goes for Intel's MKL (used in CFD) and Embree (used in Cinebench), both of which are applications where AMD does well. I don't think it's valid to accuse Intel of this kind of anticompetitive behavior anymore.
3700x-ICC-MSVC.JPG
This kind of behavior still happens, as seen in that Threadripper example, and as @Patrick pointed out in the GROMACS example back in 2019 (granted, some of the test behavior was mentioned later in an article update... the fact that Intel pitted a 400 W TDP part only available from them against a 225 W mid-range EPYC seems fairly pointless to begin with). So no: when you have a marketing department the size of Intel's, you can pretend that dirty, underhanded tactics don't exist... or have your CEO tell people that benchmarks don't matter when the numbers make you look bad.
Should you just take every comparison in a slide deck at face value? If so, almost everything in AMD's launch slides is either false and unreproducible or severely one-sided, especially their gaming tests, which showed Zen 2 as even on par with the 9900K in gaming. The same goes for Nvidia's. At least Intel told you exactly the test conditions and environment, in which the results are highly reproducible; I don't think any other vendor did any better.

The one-sidedness of benchmarks is possibly why they say "benchmarks don't matter". That line is aimed at novices who might look at a one-sided benchmark and jump to a quick, one-sided conclusion. For example, the workload of Cinebench is vastly different from any real rendering scenario, but most people don't know that. I think the saying is meant to urge people to pay more attention to real usage.

(Then people interpret this as "there's no way to measure CPU perf". What can I say? Was their CEO actually trying to fool people with a 5-year-old's logic?)

BTW, the 7742 was the second-highest part in the stack at the time, below only the 7H12, not a mid-range thing. And not all benchmarks run at full TDP.
 

RageBone

Active Member
Jul 11, 2017
TR1900 and 2900 are both Zen1, it's like 7900X and 7980XE.
The 7900X came out in Q2 2017 and the 7980XE in Q3 2017.
The TR 1950X came out in August 2017 and the 2990WX in August 2018.

Additionally, the TR 1900 series was Zen, the TR 2900 series Zen+.
So there is a clear change and improvement, in addition to the 2990WX being 32-core.

Unlike the i9-9980XE, which was just cheaper and soldered.

Intel allows one chipset to support at least two generations though.
Exactly, but just two generations.
Even if it is possible to run them longer; see LGA1151, which could be modded.

Zen2 only wins when it's in-cache benchmarks, Zen3 wins at productivity against skylake but still loses at gaming. Some test subjects are different but you get the idea.
According to your chart, what you say is true.
But you chose the settings and benchmarks that are shown.
I find that to be exactly what you blame AMD for doing.

It's a speech-controlling process I directly experienced. There are a lot more mentioned afterwards.
You make it out to be one, but I don't believe you.

AMD was certainly more competent and technologically superior back then, in the K7-K8 era. But here is the power consumption graph by multiplier, measured during render tests (Cinebench for ease of data collection; V-Ray and actual V-Ray-in-3ds-Max show similar results). Power usage is proportional to frequency times voltage squared. Intel chose to clock as high as they can at the expense of power at the high end in recent years because they think thermals are manageable for enthusiasts, not because they can't make high-efficiency, low-frequency products. Seems counterintuitive? Well, you can always test it yourself.
And Intel's performance at iso-frequency isn't worse; sometimes it's much better, as shown in the previous post and some other test data.
3.png
Great, another odd chart.
Where are the x- and y-axis labels for watts and multiplier?
Why not have it start at, say, a multiplier of 10 or 15, and stop at 50?

While we are at it:
What is wrong with the i7s that they don't reach 50?
I'm aware those are boost clocks, but those CPUs should still have boosted that high.

Additionally, you clearly have an issue benchmarking the AMD systems.
I'm not saying it's actually your fault, but the dots on both Intel systems make sense:
no weird jumps or gaps.
On the 3700X, there are 7 larger jumps alone.
Why is that?
I claim it is your job to figure that out, simply by virtue of having created those charts.

I'd also not claim to know what they think, if I were you.
They are a company/corp, after all.
 

111alan

Active Member
Mar 11, 2019
Harbin Institute of Technology
Additionally, TR1900 was Zen, TR2900 Zen +.
So there is a clear change and improvement in addition to the 2990x that was 32 core.

Unlike with the i9 9980XE that was just cheaper and soldered.
The change is minor enough that AMD didn't even bother to update its server lineup.

Then what about the 10980XE? Zen+'s improvement isn't even as big as SKX to CLX, which focused more on features than on performance and power.

But you chose the settings and benchmarks that are shown.
I find that to be exactly what you blame AMD for doing.
Please, write a full-blown review in the comment section yourself.
I find you trying to find problems where none exist.

Short version: every game was tested with maximized render distance and effects, at 1080p or 2K resolution.

Why not have it start at like a multi of 10 or 15 and stop at 50?
Short version: because I like it that way. It doesn't affect anything anyway.
Long version: some CPUs can't go below 16x manually, and this test stays within the stock frequency range. You can't just overclock with some arbitrary voltage and pollute the results.

No weird jumps, or gaps.
On the 3700x, there are 7 larger jumps alone.
This has something to do with both AMD's manufacturing node and its LDO power design. At certain frequencies the stock P-state voltage makes several sharp drops, and after that it no longer goes down with frequency. An LDO can't output a voltage as low as a FIVR or another switching power supply can.

Why? I can see the rough picture: AMD's P-state voltage curve isn't as carefully designed, and this particular 3700X can't run at 38x on the 1.112 V from further down the ladder and still be stable enough to pass Prime95, and it can't go beyond 44x without a BSOD during testing.

Why do they only have 4-5 voltage steps across the entire frequency range, while Intel has a designated voltage for each multiplier? You would have to ask them yourself. BUT: I tested on several motherboards, and they all behave like this. Anyone can test it too; it's very simple, just a little time-consuming.

Besides, it doesn't change who wins, even before the jump.
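The explanation above (a coarse voltage ladder means power rises in steps rather than smoothly) can be illustrated with a toy model. The breakpoints, voltages and scale factor below are invented for illustration, not measured values:

```python
def ladder_voltage(mult: int) -> float:
    # Hypothetical 4-step P-state ladder: voltage is flat inside each band.
    for top_mult, volts in [(30, 0.90), (38, 1.00), (42, 1.15)]:
        if mult <= top_mult:
            return volts
    return 1.30

def est_power(mult: int) -> float:
    # P ~ f * V^2, with an arbitrary scale (100 MHz per multiplier step).
    return 1e-4 * (mult * 100) * ladder_voltage(mult) ** 2

# Inside a band, power grows gently; crossing a band boundary, it jumps.
within_band = est_power(38) - est_power(37)   # small step
across_band = est_power(39) - est_power(38)   # visible jump
print(f"37->38: +{within_band:.3f}   38->39: +{across_band:.3f}")
```

With only a few voltage bands, every band boundary shows up as a step in the power-vs-multiplier plot, which would explain the jumps without any measurement error.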

You make that out to be one, but i don't believe you.
No, you don't have to. Some people still believe the Earth is flat nowadays, and I don't care. There's always a way to find problems just by the look of things, and it's especially easy when people don't try to understand them.

But if you want anyone else to believe you, better not assume that everything other people didn't post here was intentionally misleading, and get your own test results to prove the point.

And in fact I thought something was wrong too, before I tested on multiple motherboards and drew the entire voltage curve myself. Just dig deeper; there's a lot more going on.
4.png 5.png
(The unit of the first chart is V, of the second mV)

I find that to be exactly what you blame AMD for doing.
Every one of my test results welcomes verification. I will not attack people who have actually done legitimate studies; I accept them. I will not prooflessly frame people for things they didn't do.

But most importantly, I talk about tests and facts, while they assume sides and intentions.

Edit: I think I need to upload this, in case some people don't know.
6.png
 

RageBone

Active Member
Jul 11, 2017
The change is minor enough that AMD doesn't even bother to update its server lineup.
Well, I'm not that sure myself, but multiple people I trust on such topics have said that Naples is already Zen+ anyway.
Even if so, what and where is the issue?

Then what about 10980xe.
First: nice whataboutism.
Second: thank you for the one example where Intel released "3" "generations" of CPUs for a platform instead of 2.
But I write that in quotation marks because, as you said yourself here:

Zen+'s improvement isn't even as big as SKX to CLX, which focused more on features than on performance and power.
The 9980XE was a 7980XE with better TIM at half the price.
The 10980XE, surprisingly, is Cascade Lake, so that is at least two architecture generations on one platform.


I find you trying to find problem from some non-existence.
That sentence hardly makes sense to me.

And the only issue I am starting to have is that this discussion with you is basically off-topic.

I will not attack people who have actually done legitimate studies; I accept them. I will not prooflessly frame people for things they didn't do.
Studies are done by asking and answering questions, and by working through criticism toward a final point that can be agreed upon.
Once there is no criticism left, all questions are answered, and the point is clearly made and agreed upon, then it is a legitimate study.

I will not attack people who have actually done legitimate studies; I accept them.
Even worse than being a fanboy is to simply accept things.
So to me, your study will not be legitimate until I have no questions or criticism left and agree with the point made.
Then I will accept it.
Until then, we are simply having a discussion of opinions.

Onto more concrete and hopefully constructive criticism.

Those are two very nice charts.
But the units belong on the respective axes,
not in a headline or in the text above the chart,
simply because it is otherwise ambiguous.
Does freq-voltage correspond to x-y or y-x?
What specific units are used?
If I were to assume it is volts, as with the Ryzen chart, the Intel part would be running at hundreds or thousands of volts.
And on the frequency side, I know that you mean the multipliers, but it could just as well be 50 Hz as the max frequency.

So please add the units to the corresponding axes.


Previous Question Quoted: Why not have it start at like a multi of 10 or 15 and stop at 50?

Short version: because I like it that way. Doesn't affect anything anyway.
Long version: Some CPUs can't go below 16X manually. And this test is all within stock frequency range.
"Because I like it that way" is a completely fair reason to do it that way.
I think it actually matters, though!
Such changes would improve the readability of the chart by spreading the values over more space.
One can argue that any white space without data on it is wasted space.
But I also agree that the same trick can be used to make something less obvious, so it really depends on when and how you present data.

Previous Question Quoted: Why not have it start at like a multi of 10 or 15 and stop at 50?

Short version: because I like it that way. Doesn't affect anything anyway.
Long version: Some CPUs can't go below 16X manually. And this test is all within stock frequency range.
That is exactly why I asked, or rather proposed, those changes, since both CPUs apparently can't go below a multiplier of 15.
Since it is in the stock frequency range, a multiplier of 60 is a bit high as well,
especially since neither even reached 50.

That is one of the questions I asked as well.
I expected the i7 to hit 49 and 50 no problem, considering its boosts are 51 and 50 for Turbo Boost Max 3.0 and Turbo Boost 2.0.
Considering that the Ryzen 5800X has an official max boost of 47, and the chart looks like it ran at maybe even 48 (it's hard to see exactly), that means the Ryzen is obviously boosting at 100% but the i7 isn't.

How would that be a fair comparison?
Why isn't the i7 boosting as well?


And here it would have been nice to specify that the i7's all-core boost is 47.
 

111alan

Active Member
Mar 11, 2019
215
54
28
Haerbing Institution of Technology
Well, i'm not that sure myself but multiple people i trust on such topics said that Naples is Zen+ already anyway.
Even if, what and where is the issue?



First, good whataboutism.
Second, thank you for the one example where intel released "3" "Generations" of CPUs for a platform instead of 2.
But i write that in quotation-marks because as you said yourself here:



The 9980XE was a 7980XE with better TIM and half the price.
The 10980XE surprisingly is cascade lake, so it is at least two architecture generations on one platform.




That sentence hardly makes sense to me.

And the only issue i start to have is that this discussion with you is basically Off Topic.



Studies are done by asking and answering questions and then working through criticism, so as to arrive at a final point that can be agreed upon.
Once there is no criticism left, all questions are answered, and the point is clearly made and agreed upon, then it is a legitimate study.



Even worse than being a fanboy is to simply accept things.
So to me, your study will not be legitimate until I have no questions or criticism left and agree with the point being made.
Then I will accept it.

Onto more concrete and hopefully constructive criticism.



Those are two very nice charts.
But the units belong on the respective axes, not in a headline or in the text above the chart.

Simply because it is otherwise ambiguous.
Does frequency-voltage map to X-Y or Y-X?
What specific units are used?
If I were to assume it is volts, like with the Ryzen chart, the Intel chip would be running at hundreds or thousands of volts.
And on the frequency side, I know that you mean the multipliers, but that could just as well be 50 Hz as the max frequency.

So please add the units used to the corresponding axes.




"Because you like it that way" is a completely fair reason to do it that way.
I think it actually matters, though!
Because such changes would improve the readability of the chart by spreading the values out over more space.
One can argue that any white space without data on it is wasted space.
But I also agree that the same trick can be used to make something less obvious, so it really depends on when and how you present data.



That is exactly why I asked for, or rather proposed, those changes: both CPUs apparently can't go below a multiplier of 15.
And since the test stays within the stock frequency range, a ceiling of 60 is a bit high as well, especially since neither CPU even reached 50.

That is one of the questions I asked as well.
I expected the i7 to hit 49 and 50 without a problem, considering its boost multipliers are 51 and 50 for Turbo Boost 3.0 and 2.0.
Considering that the Ryzen 5800X has an official max boost of 47, and the charts look like it ran at maybe even 48 (it's hard to tell exactly), that means the Ryzen is clearly boosting at 100% but the i7 isn't.

How would that be a fair comparison?
Why isn't the i7 boosting as well?


And here it would have been nice to specify that the i7's all-core boost is 47.
But the units belong on the respective axes, not in a headline or the text above the chart.
These are not the release versions of the charts, at least not without a proper explanation. Whether I should write a review at all is uncertain.

The X-axis is the multiplier; the BCLK is always 100 MHz. The Y-axis is voltage, as noted under the charts; the unit for the first one is V and for the second one it's mV. Sorry, I didn't test both of them on the same day, so the units are different, but I don't think there is any confusion here.

For the post before, the Y-axis is package power (W), just to mention. Every platform was verified with a clamp meter.
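Since the two charts use different voltage units (V vs. mV) and plot the multiplier rather than the frequency, a small normalization step makes them directly comparable. A sketch, with hypothetical values; the 100 MHz BCLK is from the post above:

```python
# Normalize (multiplier, voltage) pairs from the two charts onto
# a common (GHz, V) scale so the curves can be overlaid.

BCLK_MHZ = 100  # base clock is fixed at 100 MHz per the post


def normalize(multiplier, voltage, unit):
    """Return (frequency in GHz, voltage in V)."""
    freq_ghz = multiplier * BCLK_MHZ / 1000
    volts = voltage / 1000 if unit == "mV" else voltage
    return freq_ghz, volts


# 47x at 1.33 V and 47x at 1330 mV describe the same operating point:
assert normalize(47, 1.33, "V") == normalize(47, 1330, "mV")  # both (4.7, 1.33)
```

This is exactly the kind of conversion a reader has to do mentally when the units only appear in the text, which is why putting them on the axes matters.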

I expected the i7 to hit 49 and 50 without a problem, considering its boost multipliers are 51 and 50 for Turbo Boost 3.0 and 2.0.
Considering that the Ryzen 5800X has an official max boost of 47, and the charts look like it ran at maybe even 48 (it's hard to tell exactly), that means the Ryzen is clearly boosting at 100% but the i7 isn't.
All charts end at 47X, except for the 3700X, which can't run stably at that frequency.

These are the default all-core boosts. Anything above that makes the processors enter per-core boost states, like P0-1c, P0-2c, etc. Since not all cores are made equal (especially when TB 3.0 or TVB is involved), and CPUs aren't designed that way after all, I don't think forcing those clocks as an all-core turbo can produce reliable results that are immune to motherboard interference.

There's one extra data point tested for the 11700K, just to make things look better. Perhaps I should give it a different color.

One can argue that any white space without data on it is wasted space.
Originally I wanted to explain that P = C·f·V², which is the reason high-frequency CPUs have lower performance per watt (the higher voltage required), and to calculate the rough power the uncore uses.
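That relation is the standard CMOS dynamic-power approximation. A short sketch of what such an explanation might have shown; the capacitance term and the two frequency/voltage operating points are assumed numbers purely for illustration, not measured values from the charts:

```python
# Rough CMOS dynamic-power model, P ~ C * f * V^2, as named in the post.

def dynamic_power(c, freq_ghz, volts):
    """Relative dynamic power P = C * f * V^2 (arbitrary units)."""
    return c * freq_ghz * volts ** 2

C = 1.0  # treat switched capacitance as constant between operating points

low = dynamic_power(C, 3.6, 0.90)   # 3.6 GHz at an assumed 0.90 V
high = dynamic_power(C, 4.7, 1.33)  # 4.7 GHz at an assumed 1.33 V

power_ratio = high / low                       # ~2.85x the dynamic power
perf_per_watt_penalty = (high / 4.7) / (low / 3.6)  # ~2.18x power per GHz
```

So in this model, roughly 31% more frequency costs almost three times the dynamic power, because the voltage term enters squared; that is the performance-per-watt argument in one line.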

Additionally, I didn't test multipliers below 16X even when possible, for two reasons. One, that data would have nothing to be compared against. Two, it could take another one or two hours to add that data for each capable CPU, without any real benefit: the benchmark slows down as the multiplier is reduced, and it must be run multiple times to ensure nothing weird is going on.

And two things I don't agree with:
The 9980XE was a 7980XE with better TIM and half the price.
The 9980XE launched at the same price as the 7980XE, $1,979. The price drop happened with the 10980XE. It had to be done: Intel had the W-3175X and AMD had the 3990X at that time, and an 18-core CPU was no longer top class overall.

Studies are done by asking and answering questions and then working through criticism, so as to arrive at a final point that can be agreed upon.
Once there is no criticism left, all questions are answered, and the point is clearly made and agreed upon, then it is a legitimate study.
Agreed, but I don't think accusing people of cheating, lying, or slander, or attacking people's personality, without any proper proof or understanding, is a legitimate form of "criticism".
 
Last edited:

RageBone

Active Member
Jul 11, 2017
584
145
43
The 9980XE launched at the same price as the 7980XE, $1,979. The price drop happened with the 10980XE.
right.

framing people of cheating, lying, slandering, or attacking people's personality, without any proper proof or understanding, are legit ways of "criticism".
You are kidding me, right?

According to your chart, what you say is true.
But you chose the settings and benchmarks that are shown.
I find that to be exactly what you blame AMD for doing.
That is what I said, and it is the only thing I can think of that you might interpret that way.
Maybe you are blaming AMD for more things than I thought?

Maybe murder and child kidnapping as well?
Everything to make Intel look bad, right?
 

111alan

Active Member
Mar 11, 2019
215
54
28
Haerbing Institution of Technology
right.



You are kidding me, right?



That is what I said, and it is the only thing I can think of that you might interpret that way.
Maybe you are blaming AMD for more things than I thought?

Maybe murder and child kidnapping as well?
Everything to make Intel look bad, right?
No.

Everything I blame AMD for, I have proof of, listed in the first post.

Every one of my friends who tried to post anything negative about AMD, while being honest and scientific, suffered organized attacks. That includes me.

These attackers fake things to prove ridiculous, fabricated ideas, or simply never care about any theory or evidence presented; they attack the people themselves, framing them as "Intel paid" without any proof at all.

That's why I originally chose not to write the review, which was due in 2020, although I had a lot of test data prepared and went as far as posting a preview on CHH. Then I realized how far these people would go to defend AMD's lies, as Der8auer and The Stilt both had, and how bad the time was for any valuable analysis.


As for Intel: who cares. They have no business with me; perhaps that's their advantage in my eyes. They don't have that many crazies covering for them; perhaps that is why they look bad and good at the same time.

But when I say something negative about Intel (for example, I never liked their SSDs after they started using L06B chips), nobody frames me as an AMD- or Samsung-paid commenter. Not even the most devoted fan tried once I presented my reasoning.

People are selfish. I want the PC/server hardware community to be rational, so I can comfortably study how hardware works and communicate freely without constantly being reminded of some “political correctness” out of nowhere. Perhaps that coincides with what most people here want, after all.


Edit: Oh, by the way, some false "specialist" went as far as claiming that the on-die mesh is much slower than inter-die IF, based on the results of an extremely non-universal, self-written, closed-source benchmark. That Zen 2 review should still be up on CHH. Perhaps only these "experts", who take others for fools, can survive.

Claiming someone is like that kind of people is enough of an insult.
 
Last edited:

ReturnedSword

Active Member
Jun 15, 2018
526
226
43
Santa Monica, CA
Is there any proof that these Chiphell forum posters are paid AMD operatives? I highly doubt it. Seems like a typical case of fanboyism to me. Fanboys have existed in every subject for as long as the internet has existed. Who cares? Let’s just pick the platform that works best for our needs at the time of the build, and move on.
 
  • Like
Reactions: RageBone

Wasmachineman_NL

Dell Precisions FTW!
Aug 7, 2019
1,385
449
83
Oh yeah - I only had a single exposure to a pentium 4 Norwood, and I put a stop to it after 5 weeks and traded it in for an AthlonXP.

Although if you go to VOGONS you'll have people hailing the Rambus P4 like it's the best thing since sliced bread. Tell them it has a GeforceFX, an Aureal Vortex on a beige case and you'll hear them collectively swoon.
jfc, don't get me started on VOGONS. I could write an entire goddamn paper on why that forum is absolute dogshit.

RAMBUS makes sense for bandwidth-starved processors like the Pentium III, though.
 

RageBone

Active Member
Jul 11, 2017
584
145
43
[...] communicate freely without constantly being reminded of the “political correctness” out of nowhere.
Please, when and where did someone here do that?
How?

Are you perchance not reading this in English?
Maybe through a translation app?
Because I have no clue how else you could come up with such nonsense.
 
  • Like
Reactions: ReturnedSword