Dell PowerEdge C6145 x2 + PowerEdge C4130? Good combo?


Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
You bought some 10 year old servers?

As you also only bought a single drive for each C6145, I don't think you realize that it's two nodes per chassis, and as such, you will need two.

I have made no allusions as to what is optimal. The OP made one goal very clear (in case you missed it), which was performance. It appears to me that they did not care for my advice as it did not provide them with the validation they were looking for.

Performance metrics are not my opinion. They are facts that cannot be argued. Every statement I have made in regards to them can be validated. I encourage you to refute any of them with evidence.
Your advice was to return the servers, lol. Big head maybe?

Please stop posting here. You just continue to muddy the waters with your arrogant and one-dimensional doublespeak.

You don't think I know the C6145 has two nodes?? LOL do you really think I only have six SSDs for this project?

No, the truth is I have about 10 SSDs that will also be used in conjunction with this project. We get that you don't like it - you don't like the hardware. Now please go find something that strikes your interest. And don't let the door hit you on your way out. You continuing to post here after saying your piece is making you look foolish.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,090
1,507
113
Please, you're the one who took immediate offense when soliciting opinions, and you continue to think that I have a personal vendetta against the hardware that you hold to some godlike status. I furnished an opinion and presented evidence to support it. You didn't care for it since it didn't validate your position, but you have made no effort to rebut the points I made. It's unsurprising that responses from others have been fairly limited, given your hostility.
 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
@BlueFox

Let me break this down for you. Perhaps this will help you understand my motives and methods. I think you just have a major disconnect and you are not understanding the underlying principles at play here from a computer enthusiast's / hardware collector's POV. With that in mind, it is obviously very difficult for you to admit that relative performance is not measured by your standards. You obviously invalidated the entire project because it doesn't measure up to YOUR standard.

Guess what? IT'S NOT "YOUR" standard. It's a garbage view that you likely will never escape from. We know you have nothing positive to add here, and all you've done is be critical of the project. People want to see this dream manifest, and it will, regardless of the naysayers, and regardless of what people like you think of it. At the end of the day, you are the one missing out. Let that sink in for a few minutes. Reflect on it, and perhaps you will begin to see the light...

Before you say how horrible the performance is going to be, please keep in mind that is not the chief reason for this build. That should be pretty obvious. We aren't measuring this project's performance relative to new hardware metrics. This is a sentimental build, because I am using the Opteron 6180 SE, AMD's 12-core "best in class" CPU based on the K10 architecture. As you may or may not be aware, I ran a Phenom II X6 chip for many years, much longer than most PC enthusiasts, and it was my primary rig up until about a year ago. The Phenom II left quite an impression on me and quickly became my favorite CPU of all time. Partially due to its rock-solid performance, snappy responsiveness and low latency, but also because the platform was pretty well fleshed out at that point, had DDR3 1T 1600MHz support, and it was the only chip I've ever had that allowed me to unlock additional cores and take advantage of "hidden power". It was also AMD's first chip to see substantial increases in memory throughput (and performance improvements in the level 3 cache as well) from overclocking the memory controller (CPU NB). All these attributes, intricate details and fine-tuning ability have put the Phenom II very close to my heart.

And look, please take this into account: when I say this is a sentimental build, that does not mean that we won't or can't have an emphasis on performance. No, in truth it is actually quite the opposite. Some people *cough* *cough* like to make assumptions and measure other people's hardware projects by current tech performance metrics and standards, holding whatever they are working on "hostage" to a false comparison against recent / newly released tech and taking that as the gold standard. NO. The benchmark standard here is tech from the same class / era / epoch. We are competing against LIKE HARDWARE. That is my primary and ONLY metric in determining overall performance. We will not hold this technology hostage to a false standard, and I want to emphasize that point so you know my rationale behind creating this beast. And that's exactly what you are doing. You are holding my hardware hostage to your very flawed definition of relative performance and your very flawed perception that all hardware must be compared to brand new, recently released tech, and that if it doesn't stand up, it's junk... and you refuse to see the project in the correct light. Just want to point those things out, as you seem to have a major, persistent and stubborn disconnect here.

Well, I've had my fun with the consumer-grade chip. It is now time to shift gears and revisit the K10 architecture in a server environment. Like a magnet, that quest led me to the 6180 SE, AMD's 12-core K10 variant, a best-in-class processor.
 
Apr 9, 2020
57
10
8
Next step is going to be tackling the issue of RAM (ideally I want to populate all 64 slots for max memory bandwidth in both C6145 servers). Do you think 1TB of RAM is going to be enough? :)
As much fun as it might be, you're going to find that maxing out the RAM will be very expensive. I don't know what those servers take, but 16GB sticks from that era are going to have held their value. 16GB sticks of reg ECC seem to still go for around $40 a pop. I'm all in favor of going for the max, but do you really want to spend $2600 on RAM alone?

4GB sticks are a lot more common and go for far less per gig; 256GB is still more than most people will ever see (I doubt I even own that much, and I literally have a box of DIMMs). I also have some trouble believing you'll see *that* much of a performance increase by maxing out the memory. Generally servers with that kind of capacity are rarely ever configured with it, because the operating system and applications have to be fine-tuned for it to be worthwhile. The OS will certainly be able to "see" all of it, but in terms of applications you're probably past the point of diminishing returns.
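Quick back-of-envelope sketch in Python, if it helps; the per-stick prices here are rough ballpark assumptions based on what I've seen on ebay, not quotes:

```python
# Rough cost to fill 64 DIMM slots with reg ECC DDR3 at ballpark ebay prices.
# Prices are loose averages (assumptions) and will vary by seller and lot size.
SLOTS = 64
price_per_stick = {16: 40, 8: 17, 4: 7}   # GB per stick -> approx USD each

for size_gb, usd in price_per_stick.items():
    total_gb = SLOTS * size_gb
    total_usd = SLOTS * usd
    print(f"{size_gb}GB sticks: {total_gb}GB total, roughly ${total_usd:,}")
```

That works out to roughly $2,560 for the full 1TB with 16GB sticks, versus a few hundred dollars for 256GB with 4GB sticks.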

Exactly what memory does it take, anyway?

Phase after that will be acquiring 4 GPUs to use in conjunction with the C4130 GPU server.
Silly question, and I am asking this for my own edification: what do you actually do with a GPU server besides bitcoin mining? I have a bunch of old GPUs in my inventory that I'd love to do something with. They're fairly old so I'd need to find something fun that doesn't rely on a lot of efficiency. A GPU folding rig might be a good use, but again I don't really know myself.

So as you can see, we are coming at this with a high performance emphasis at every angle. But some would say it's not enough, probably thinks he's a server god or something. Wouldn't be the first time I've seen people take their own words as "gospel" LOL.
You mentioned in your original post having only standard gigabit ethernet. You probably won't *need* anything faster, but who among us does? Might be fun to build a fiber-optic or InfiniBand backbone to get 10 or even 40 Gb/s transmission speeds. Again, if you're not moving around large volumes of data it may not help, but it could be a fun project. Honestly I'm wanting to do the same, though I actually sorta have an application for it (if ya squint).
 
Apr 9, 2020
57
10
8
@Storm-Chaser regarding your pictures... yeah, that looks exactly like my place, except you have more branded equipment than I do. Love those old Dell Precision workstations; that thing is a monster but still works. My environment is a bit tidier at the moment but I do love my equipment.
 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
First off I want to thank you for the questions. They are on point. I am glad you are taking this seriously. The answers to those questions will definitely help me formulate the optimal memory configuration when it's all said and done. Remember, I am just learning the ins and outs of this server. This data and "intel" on the C6145 is just as new to me as it is to you.

As much fun as it might be, you're going to find that maxing out the RAM will be very expensive. I don't know what those servers take, but 16GB sticks from that era are going to have held their value. 16GB sticks of reg ECC seem to still go for around $40 a pop. I'm all in favor of going for the max, but do you really want to spend $2600 on RAM alone?
Yup, I'm just dreaming on the 1TB of RAM thing; that would be ludicrous and definitely break the bank. Plus there wouldn't be much of a benefit unless you had something purpose-built to utilize that insane amount of memory.

Exactly what memory does it take, anyway?



Here is the rundown on memory specs. I'll just tell you now, it does seem a little convoluted at first blush. I've included snips straight from the service manual that do a pretty good job of explaining the specific memory sets / configurations that work with the PowerEdge C6145 and their respective performance. According to my calculations, it's RDIMM memory that I want to target.



The PowerEdge C6145 has 32 memory sockets split into 4 sets of 8 sockets with one set of memory sockets per processor. Each 8-socket set is organized into 4 channels, with 2 memory sockets per channel.

This visual representation shows the physical layout of memory slots on the motherboard. Populating ALL 32 slots with the appropriate memory modules will give me an effective 16 channels of memory bandwidth. Yes, I will be running 16-channel memory in this beast.

And keep in mind, that's 16 channels per node. Now multiply that by 4 to get the total memory throughput across the entire implementation, spanning my whole server landscape. It's off the chain and off the charts. Never in my wildest dreams did I envision I would be working with hardware that brings that level of memory performance to the table. Simply put, it's astronomical.
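For anyone who wants to sanity-check the numbers, here's the theoretical-peak math laid out in Python. This is only a sketch of peak DDR3 bandwidth from channel count and data rate; the speed the board actually trains at depends on the CPUs and the DIMMs installed:

```python
# Theoretical peak DDR3 bandwidth for the C6145 layout described above.
# DDR3 transfers 8 bytes per beat, so peak GB/s per channel = MT/s * 8 / 1000.
CHANNELS_PER_CPU = 4   # per the service manual: 4 channels, 2 sockets each
CPUS_PER_NODE = 4      # quad-socket node -> 16 channels per node
NODES = 4              # two C6145 chassis, two nodes each

def peak_gbs(data_rate_mts):
    per_channel = data_rate_mts * 8 / 1000
    per_node = per_channel * CHANNELS_PER_CPU * CPUS_PER_NODE
    return per_node, per_node * NODES

for rate in (1066, 1333, 1600):
    node, cluster = peak_gbs(rate)
    print(f"DDR3-{rate}: ~{node:.0f} GB/s per node, ~{cluster:.0f} GB/s across {NODES} nodes")
```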

Board layout for context:

Bottom line:
I should look into RDIMMs, and more specifically, dual-rank RDIMMs. I am going to go ahead and trust Dell when it comes to determining the optimal memory configuration for my given scenario. According to the performance data, my choice is clear:



(In my interpretation) the 1.5V dual-rank 1600MHz RDIMMs are the modules I should use to ensure maximum memory bandwidth and throughput. I intend to use the fastest memory supported by the PowerEdge C6145 server.

Silly question, and I am asking this for my own edification: what do you actually do with a GPU server besides bitcoin mining? I have a bunch of old GPUs in my inventory that I'd love to do something with. They're fairly old so I'd need to find something fun that doesn't rely on a lot of efficiency. A GPU folding rig might be a good use, but again I don't really know myself.
Most likely folding or GPGPU tasks. I will be putting it to good use. And we will get there, make no mistake. But there is no definite plan at the moment, and that's part of the reason I'm here. I want to get feedback on this. Nothing is set in stone and I welcome any recommendations, short of returning the servers. :)

I will put forward some Dell "propaganda" on use case:

Enable extreme I/O flexibility for workload optimization with up to two PCIe 3.0 expansion slots and an optional 96-lane PCIe 3.0 switch that allows accelerators to be pooled across processors. Support for InfiniBand® EDR and NVIDIA® GPUDIRECT™ lets you tailor data throughput and reduce latency, while support for InfiniBand EDR protects your IT investment and reduces TCO. Uniquely balanced CPU/GPU configurations for a range of demanding workloads. Up to four 300W PCIe accelerators per 1U server.

The PowerEdge C4130 provides supercomputing agility and performance in an ultra-dense platform purpose-built for high-performance computing (HPC) and virtual desktop infrastructure (VDI) workloads.

@Storm-Chaser regarding your pictures... yeah, that looks exactly like my place, except you have more branded equipment than I do. Love those old Dell Precision workstations; that thing is a monster but still works. My environment is a bit tidier at the moment but I do love my equipment.
Yeah, I just so happened to be taking inventory that day, so I had all my hardware out; normally the work environment is much tidier. The Dell Precision T7500 you see under the ping pong table is a beast as well. It has two X5690s under the hood, 24GB of triple-channel DDR3 @ 1333MHz and a SATA 6Gb/s solid state drive. It's down at the moment while I work on the cooling system, as we are really close to the temp limit when I push it hard. TDP of 130W per processor, so I am working on integrating a second fan into the airstream to provide direct, active cooling to CPU #1.
 
Apr 9, 2020
57
10
8
Oh, I had a Precision T7500 at work many lifetimes ago. It was old THEN and still a beast THEN. Dual X5690s hold their own even today. I still want one sometimes, but they are huge and heavy. Besides, I mostly build custom systems for myself now. I spent some more time looking at C6145s, and while I agree it's an awesome machine, I don't think I'll be shopping for one. Don't really need dual quad-sockets in a 2U format. However, next time I have a little money to throw at this hobby I'm gonna try and get my hands on a 4U quad-socket system just for my own entertainment. That is of course assuming I can't find that 8-way P3 Xeon by then, mwahahahaha.

So regarding your memory: what you want is called "reg ECC RAM" or registered (the R in RDIMM). You can find plenty of it on ebay thanks to technology liquidators; just make sure you get BOTH registered AND ECC, as there exists RAM that is one or the other (my own mistakes on this front have recently screwed me, so I feel like sharing).

This further indicates you *probably* don't want to try to max it, because reg ECC DDR3 is expensive and still in high demand, especially in 16GB sticks. Just a cursory check around ebay: 16GB sticks average $40 each, 8GB sticks $15-20 each, 4GB sticks around $6-8 each, with a break if you buy even more. Given how much you're looking for, it's possible you can get a seller to cut you a deal. Most of these things are coming from technology liquidators that got them out of e-waste to begin with, so they've got a pretty high profit margin.

Now I've heard conflicting things on this. Conventional wisdom as I have always heard it says that you need to use the same brand/model/size of stick in every slot to get the best performance and stability. However, I've also been told I am wrong by sources I would consider reliable, and that as long as it's the same speed and size you'll be fine. You *probably* won't notice a problem if you mix and match brands, but if you can, try to get all the same model just to be on the safe side.


Anyway, back to the topic. If you're just doing things like folding, then network performance isn't going to be super important. My understanding of F@H is you download a job, process it, and send it back when it's done. Not a whole ton of data flying between network clients. Still might be fun to build a backbone, but not needed.

The GPU server is going to be the most in demand; apparently GPUs are a lot better at folding tasks than CPUs. Still, there's plenty of intense CPU work to be done. Maybe you could set up separate folding teams for each system and see which one earns points faster?
 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
Oh, I had a Precision T7500 at work many lifetimes ago. It was old THEN and still a beast THEN. Dual X5690s hold their own even today. I still want one sometimes, but they are huge and heavy. Besides, I mostly build custom systems for myself now. I spent some more time looking at C6145s, and while I agree it's an awesome machine, I don't think I'll be shopping for one. Don't really need dual quad-sockets in a 2U format. However, next time I have a little money to throw at this hobby I'm gonna try and get my hands on a 4U quad-socket system just for my own entertainment. That is of course assuming I can't find that 8-way P3 Xeon by then, mwahahahaha.
Yeah, I get that. One of the driving forces for me behind this build was the connection to my favorite CPU, the Phenom II X6. Had there been no connection, I probably wouldn't have given this server a second look. Still, I really consider the Phenom II platform to be AMD's first real high-performance platform, the first platform that I would get behind, in theory. Everything before this was SLOW and non-responsive as far as I am concerned. Of course, maybe 192 cores of my favorite CPU is a little over the top. But YOLO.
 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
You bought some 10 year old servers?

As you also only bought a single drive for each C6145, I don't think you realize that it's two nodes per chassis, and as such, you will need two.

I have made no allusions as to what is optimal. The OP made one goal very clear (in case you missed it), which was performance. It appears to me that they did not care for my advice as it did not provide them with the validation they were looking for.

Performance metrics are not my opinion. ...
I really couldn't let this one go unanswered due to your gross miscalculations regarding performance.
If you truly believe I am building a low-performance system here, then it's obvious you know absolutely nothing about Dell's C line of high-performance servers.

I take it that from your point of view the Dell PowerEdge C6145 is a low-performance server relegated to menial tasks? And the two-SSD RAID 0 is the slowest route to take? That's what you are telling me...

Is their C line of servers generally known to be at the bottom of the barrel? LOL

16 channels of 1600MHz DDR3 memory? Capable of 1TB of RAM? These options and attributes seem weak in your opinion? 16 memory channels. Hmm, yeah, low bandwidth there. Won't perform well.

And the dual PowerEdge C4130? Another low-grade server from Dell? With two SSDs in RAID 0 for the OS? That would slow me down a lot, right?



The C4130 wouldn't happen to be the faster, more efficient and more powerful next-generation replacement for the C410x, right?

And what of the CPUs in the C4130? Must be the case that two 12C/24T Xeon E5-2676 v3s are near the bottom of the totem pole, stuck down there with the single-core Celerons? Right? Heck, I have a 486DX that would blow the doors off it.

That brings us to the Opteron 6180 SE. Again, a very, very low-performing CPU. It was never best-in-class performance, was it?

If I remember correctly, it's at the bottom of the list of CPUs in this family. Oh wait, I almost forgot, it's opposite day! (For the record, this guy is knocking the best-in-class CPU from that era.) Equivalent to someone bashing the 3950X and calling it slow.

Apparently, he knows nothing about AMD's flagship K10 CPU.

Here is a reality check for you:



Dell Intros Hyperscale-Inspired Server for High-Performance Computing -- Campus Technology

And I will post this here for all to see. I encourage you to read the whole article, Fox. For someone like you with such a lack of understanding of Dell's C line, it would be super beneficial.

And on to storage? How about enterprise-level HDDs for a total capacity of 12TB for my NAS?

And we will set up every server's boot partition with two SSDs in RAID 0 for tremendously low throughput. OS partitions on a 1TB SSD? Much slower than a 5400 RPM laptop drive, right?

And that's not even mentioning storage per individual server. Ultimately each server will have over 5TB of high-speed drives.

And what of the HP DL360p? We don't want to leave it out of the fray, do we? It only has 64GB of RAM. So in that case I think I can get away with running a maximum of one virtual machine, with very limited performance capabilities, right?

Matter of fact, I'm thinking of bringing that single-core Celeron back to life and scrapping all of this. It gets tiring moving heavy computers, and if a Celeron can do the job, might as well use that instead.

The results speak for themselves. ...
In recent benchmarks, the PowerEdge C6145 ranked as the highest performing x86 2U shared infrastructure server on the market based on SPECfp_rate2006 results. Acosta said in a comparison with the HP ProLiant DL980, Dell's PowerEdge C6145 performed 21 percent faster, and did it in one-fourth the space and at one-fifth the cost.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,090
1,507
113
Are you going to keep trolling with 10-year-old marketing materials or finally post some benchmarks? Any common one will do, with actual numbers and not some relative comparison to equally ancient hardware. You could download PassMark, CineBench, etc. and run them yourself. That will settle everything, won't it? What's the holdup there?

Since you wish for me to keep indulging you, a high-end modern single-CPU system does have more memory bandwidth than 16 channels at 1600MHz (which drops to either 800/1066/1333 with large DIMMs depending on the ranks and voltage). Not to mention 4TB of RAM per CPU and not being slowed down by NUMA across 4 sockets.

A pair of RAID 0 SATA SSDs is nothing special either. A 2TB M.2 SSD in a laptop will get 5 times the throughput. I've not mentioned storage at all, so I'm not sure why you decided to bring it up.

Need I go on? I've owned multiple quad G34 servers (plus Socket 940 and 1207), various generations of Intel's quad-socket systems, and Dell C6100 servers. Please be sure to educate me further on them...
 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
You do realize I am going to blow every single baseless supposition you make out of the water, right? You seem to have great difficulty comparing my hardware to other hardware in the same class. Is that because you know it offered best-in-class performance and that would totally contradict your attitude?

Only God knows why you are comparing dissimilar hardware from recent tech / the consumer market (non-enterprise level) and insisting that I should take you seriously. You are under the delusion that any and all hardware must stand up to the current performance metrics of recent tech. This is a very ignorant view, I might add.

Are you hesitant to compare hardware in the same class because you know it blows every single comparable server of that timeframe out of the water? And does it in a much smaller, more condensed enclosure?

If you don't think I'm serious about performance, think again. And I will make the statement right now that there are absolutely ZERO workstations or consumer-grade rigs that will even come close to the amount of effective memory bandwidth I am about to bring to the table.

To illustrate the point, let's take a look at my Z820's memory bandwidth specs. I chose this because the hardware specs are quite similar (about a year apart, both DDR3, with similar stock memory performance), so we can get a rough estimate of the theoretical memory performance of my cluster as a whole.



So let's just take one category for simplicity's sake. This is 8-channel memory. Since each C6145 node has 16 channels, we should be able to effectively double the read result on this Z820 to get a ballpark read on performance for one C6145 node.

So that would be:
107GB/s x 2 = 214GB/s (all sixteen channels)

So we can round the effective read speed of a C6145 node down to around 200GB/s.

Since we essentially have 4 nodes running in 16-channel mode (two nodes per chassis), we can multiply this number by 4 to come up with a relative, theoretical bandwidth result for the cluster as a whole.

That number comes out to 800. That's 800GB/s of effective read performance from my cluster. What desktop consumer-grade hardware are you going to find next to peddle your narrative that these systems are low performance? Keep in mind, this is an effective 64 channels of memory. Feel free to do the math yourself, but I'm afraid you will be let down.
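To lay that math out explicitly in Python: fair warning, this assumes read bandwidth scales linearly with channel count when jumping between two different platforms and memory speeds, which is a generous assumption:

```python
# Scaling the measured Z820 read result up to the C6145 cluster.
# Caveat: assumes read bandwidth scales linearly with channel count across
# two different platforms and memory speeds, which is optimistic.
z820_read_gbs = 107             # measured read on 8 channels of DDR3-1866
per_node = z820_read_gbs * 2    # a C6145 node has 16 channels vs the Z820's 8
per_node_rounded = 200          # rounded down, as above
cluster = per_node_rounded * 4  # four nodes across the two chassis

print(f"~{per_node} GB/s per node, rounded down to ~{per_node_rounded} GB/s")
print(f"~{cluster} GB/s effective read across the cluster")
```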
Here are the specs:

Dual CPU HP Z820 Workstation
-Two Xeon E5-2696 v2 processors, 3.5GHz turbo
-24 cores / 48 threads
-LGA 2011
-L1 Cache: 1.5 MB
-L2 Cache: 6.0 MB
-L3 Cache: 60.0 MB


64GB Hynix DDR3 (16 x 4GB) PC3-14900 DDR3-1866MHz Registered ECC DIMMs
-Locked in 8 Memory channels for maximum bandwidth
-Bi-Directional Differential Data Strobe
-VDDQ = 1.5V (1.425V to 1.575V)
-Supports ECC error correction and detection
-On-Die Termination (ODT)


(120mm x 2) Factory Hewlett Packard Liquid Cooling Solution
-Each CPU has an individual 120mm liquid cooling system and radiator
-Redundant systems in place to maintain airflow around CPUs in the event of a fan failure
-Rated for up to 150W TDP, and well beyond
-Temps rarely if ever go above 150°F

1450W Power Supply
-90% Efficient
-80 PLUS GOLD compliant
-Wide-Ranging, Active PFC
-Energy Star qualified
-Easily handles two 300W TDP GPUs

MSI Radeon RX 5700 XT GAMING X 8GB GPU
-256-Bit GDDR6 14Gbps
-2560 Stream Processors
-Core Clock: 1730MHz
-Boost Clock: 1980MHz
-MSI's aftermarket cooling system for this GPU is easily one of the best designs / solutions on the market. Pretty much second to none.
-Matter of fact, I can peg the GPU at 100% and still not hear it
-A good GPU is a cool running GPU


The Z820 has been approved for a video card plus coprocessor card(s) as a factory configuration (see the Avid or Tesla configurations for details). Those setups can put out a lot of heat, and yet the system will still run within the factory thermal envelope without any issues. The HP Z820 has one of the best OEM cooling systems ever created. It's still in a class of its own as far as I am concerned.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,090
1,507
113
Again, you're wrong. Your Opterons support 1333MHz max and will drop to 1066MHz or lower with large DIMMs, so you're looking at 42.7GB/s per CPU (reference) or 171GB/s max per node. AMD Epyc is 205GB/s of bandwidth per CPU (example). So again, a single CPU is better than a node. Would you like me to show you what this looks like when you have multiple CPUs in a chassis?
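Here is the arithmetic behind those figures if you want to check it yourself; it assumes the Opterons run their DIMMs at DDR3-1333 and that the Epyc example uses 8 channels of DDR4-3200, per the linked reference:

```python
# Peak memory bandwidth: one quad-Opteron C6145 node vs a single modern Epyc CPU.
# Assumes DDR3-1333 on the Opteron side and 8 channels of DDR4-3200 for Epyc.
def peak_gbs(data_rate_mts, channels):
    return data_rate_mts * 8 / 1000 * channels   # 8 bytes per transfer

opteron_cpu = peak_gbs(1333, 4)   # ~42.7 GB/s per Opteron 6180 SE
opteron_node = opteron_cpu * 4    # ~171 GB/s for a quad-socket node
epyc_cpu = peak_gbs(3200, 8)      # ~205 GB/s for one Epyc CPU

print(f"Opteron node: ~{opteron_node:.0f} GB/s, single Epyc CPU: ~{epyc_cpu:.0f} GB/s")
```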

I see you have yet to post any CPU benchmarks however. I look forward to seeing some PassMark or CineBench results. Please do not disappoint everyone by not posting those. I'm puzzled why you have yet to do so when they are so critical to your argument.
 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
Again, you're wrong. Your Opterons support 1333MHz max and will drop to 1066MHz or lower with large DIMMs, so you're looking at 42.7GB/s per CPU (reference) or 171GB/s max per node. AMD Epyc is 205GB/s of bandwidth per CPU (example). So again, a single CPU is better than a node. Would you like me to show you what this looks like when you have multiple CPUs in a chassis?

I see you have yet to post any CPU benchmarks however. I look forward to seeing some PassMark or CineBench results. Please do not disappoint everyone by not posting those. I'm puzzled why you have yet to do so when they are so critical to your argument.
The servers aren't here yet, genius.

But I'm glad you said that on the memory spec. I might have gone ahead with 128 1600MHz modules, but now I can get my ducks in a row before I pull the trigger. I assumed that since it was a later-model K10 it would have support for DDR3 1600MHz memory. Also note the 6180 SE appears to support a maximum of 12 DIMMs per socket. We have only 8 DIMM slots per socket, so we should be fine and able to capitalize on all 16 channels of memory across all four nodes. My estimate was actually not that far off... that's still a tremendous amount of throughput, and I haven't even brought the GPU server into the fray! I'm sure you'll find something to knock about on that as well....

I noticed you have not posted any benchmarks either, and the onus is on you because you are the one making ludicrous claims that this will be a low-performance setup. Next thing you're probably going to tell me is that your laptop from 2004 has more memory bandwidth than my entire cluster combined.

Or, perhaps you'd like to compare my cluster to an original 16 bit Intel 8086 processor in terms of compute power? And I don't mean the K version.

5 SSDs and two 1100W PSUs arrived today...

 

Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
I presume you will be excluding all benchmarks that present your CPUs unfavourably then?

Quad Opteron 6180s: https://www.spec.org/cpu2006/results/res2011q2/cpu2006-20110411-15600.pdf
A few desktop CPUs: https://images.anandtech.com/graphs/graph14605/111159.png

None of these even rely on newer instructions like AVX-512, where performance goes up exponentially.
Do I have quad Opterons, or do I have 8?

For one, that benchmarked IBM server is a 2U enclosure housing 4 Opteron CPUs.
Dell's C6145 doubles that processing power within the same confines.

Also note, the very benchmark you are quoting is the one in which the C6145 broke the record.

These should keep you busy for a while.







Dell Targets HPC With PowerEdge C6145 Server
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,090
1,507
113
You have quad Opterons. In fact, you have 4 such servers. Your nodes just happen to share power supplies and a drive backplane within a chassis, but they are otherwise completely independent. The fact that the IBM server is also 2U but consists of only a single node is irrelevant: since processing power is segregated at that level, there is little point in measuring things any other way, as you're going to be running 4 independent OSes across what you have. CPU performance does not really vary from system to system.

I have not said that the system did not perform admirably a decade ago. I have stated that it is slow compared to today's desktop CPUs, which it is. On most of the tests I showed, a 9900K is faster than two nodes. Holding the record 10 whole years ago means nothing today.

433.milc - 9900K is 165% the speed of 4 x Opteron 6180 SE
444.namd - 9900K is 354% the speed of 4 x Opteron 6180 SE
450.soplex - 9900K is 524% the speed of 4 x Opteron 6180 SE
453.povray - 9900K is 429% the speed of 4 x Opteron 6180 SE

Need I go on? Do you not understand how quickly CPUs go obsolete? You seem to be purposefully dense and unwilling to grasp this or are just trolling. None of these tests even use modern instructions, where your Opterons are going to get blown away even further.