Dell PowerEdge C6145 x2 + PowerEdge C4130? Good combo?


Storm-Chaser

Twin Turbo
You have quad Opterons. You in fact have four such servers. Your nodes just happen to share power supplies and a drive backplane within a chassis, but are otherwise completely independent. The fact that the IBM server is also 2U but consists of only a single node is irrelevant; since processing power is segregated at that level, there is little point in measuring things any other way, given that you're going to be running four independent OSes across what you have. CPU performance does not really vary from system to system.

I have not said that the system did not perform admirably a decade ago. I have stated that it is slow compared to today's desktop CPUs, which it is. On most of the tests I showed, a 9900K is faster than two nodes. Holding the record 10 whole years ago means nothing today.

433.milc - 9900K is 165% the speed of 4 x Opteron 6180 SE
444.namd - 9900K is 354% the speed of 4 x Opteron 6180 SE
450.soplex - 9900K is 524% the speed of 4 x Opteron 6180 SE
453.povray - 9900K is 429% the speed of 4 x Opteron 6180 SE

Need I go on? Do you not understand how quickly CPUs go obsolete? You seem to be purposefully dense and unwilling to grasp this or are just trolling. None of these tests even use modern instructions, where your Opterons are going to get blown away even further.
No, you don't need to go on. I think there was just a misinterpretation as to what the other person was thinking. I assumed you were making a blanket claim that the C6145 was never an elite-level server (regardless of timeframe), and I think you assumed I still thought the hardware in a 2010 C6145 was viable in today's high-performance marketplace, where CPU performance tends to jump dramatically with every release.

Generally speaking, I would consider the C6145 obsolete in nearly every regard. Hence, I have to bring more cores to the table to do the same amount of work as a newer CPU, and I'll need to pull out all the other stops as well if I want to keep the hardware relevant.

When I say the C6145 is a high-performance server, I must draw that conclusion from comparisons to like hardware only.

Like age groups in a 5K run: you'd never pit a 70-year-old against a 24-year-old and expect to draw any real performance conclusions from that.

But we can aim for the best time in that age group - the only true test of "relative" performance.

When I build machines from the old days, that seems like the most fulfilling way to gauge system performance.

Any other comparison would be indirect.
 

Storm-Chaser

Twin Turbo
Lots of solid state drives for the new servers.



Harvesting memory from an HP ProLiant DL360p for use in the new servers.
Not 100% sure if the memory will work, but it's worth a shot. Spec-wise it's ideal: dual-rank, 1333MHz, ECC registered. I have enough of this memory to fully populate one node on the C6145 (32 x 4GB)… for testing.

If it works, then I think that's the best way to go about populating all 128 memory sockets across both servers. I can pick these servers up for less than $200, and each one comes with 64GB of 1333 RDIMMs (16 x 4GB). Granted, this is OEM HP memory made by Micron, but I think it will work all the same since the actual specs are nearly identical to the Dell memory. Another benefit is that I can repurpose the two to eight 15K SAS drives that usually come with the ProLiant Gen8 servers for the C6145's main storage. And purchasing the memory individually is much more expensive than paying for the complete server.
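For anyone wanting to sanity-check the whole-server-for-RAM math, a quick sketch is below. Only the sub-$200 server price and the 16 x 4GB layout come from this post; the loose-DIMM price is just a placeholder to swap for a real listing.

Code:
def cost_per_gb(total_price_usd, dimm_count, gb_per_dimm):
    """Cost per gigabyte of RAM for a given purchase."""
    return total_price_usd / (dimm_count * gb_per_dimm)

# Complete DL360p: under $200 with 16 x 4GB RDIMMs (64GB), drives and CPUs as a bonus.
print(f"Whole server: ${cost_per_gb(200, 16, 4):.2f}/GB")

# Hypothetical loose-DIMM price -- swap in a real listing before deciding.
print(f"Loose 4GB RDIMMs at $15 each: ${cost_per_gb(15 * 16, 16, 4):.2f}/GB")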

This is the memory I am hoping to use.


3TB Enterprise level hard drives for my NAS.


Harvesting memory from the DL360p server:


New monitor stand






 

Patriot

Moderator
This thread...



The 6180 is a wonderful chip, the last chip to compete with Intel till Naples. I have several holding down my paper stacks.
I understand sentimentality; I still have a 4P Magny-Cours in a closet - a beautiful work of art and a beast of its era. It took the bricktown launch to take my world records.
I benchmarked it when Naples launched, sighed, and put it away. A 4P G34 will compete with a 4P LGA 2011 v1; a 2P v2 will beat it.
A 1P Naples won't just beat it - it will run circles around it and laugh at it.

Power costs for AMD G34 run about 700W per 4P node, and roughly half that on the Intel side. I had mine overclocked to 3.8GHz, so they drew... a bit more.
Each chip is dual dual-channel, being an MCM, and if you run a memory benchmark you might be a bit disappointed.
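To put rough numbers on that, here is a back-of-the-envelope sketch of the theoretical bandwidth of a fully populated 4P node. It assumes DDR3-1333 in every channel and ignores NUMA placement, which is exactly where measured results tend to disappoint.

Code:
# Back-of-the-envelope theoretical memory bandwidth for one 4P G34 node.
# Assumes DDR3-1333 in all channels; real benchmark numbers will come in well
# below this, especially with poor NUMA placement across the eight dies.

ddr3_1333_per_channel_gbs = 1333e6 * 8 / 1e9   # 64-bit channel -> ~10.7 GB/s
channels_per_die = 2                            # each Magny-Cours die has 2 channels
dies_per_socket = 2                             # the 6180 SE is an MCM of two dies
sockets_per_node = 4

channels = channels_per_die * dies_per_socket * sockets_per_node   # 16
node_bw = ddr3_1333_per_channel_gbs * channels                     # ~171 GB/s theoretical
cores = 12 * sockets_per_node                                      # 48

print(f"Channels per node: {channels}")
print(f"Theoretical node bandwidth: {node_bw:.0f} GB/s")
print(f"Per core: {node_bw / cores:.1f} GB/s")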

Have fun with it, play with it, view it as a learning experience on how to manage servers and clusters... but don't ask for performance advice from experts and scoff at them. We will help you tune this, but after your first month's power bill you may find you want to trade up a bit. I think it costs me $75/mo for one node at 6c/kWh... and you have four nodes.
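For anyone who wants to run that power-bill math for their own rate, a minimal sketch is below, assuming a constant 24/7 load; the 700W figure and the 6c/kWh rate are the ones quoted above.

Code:
# Sanity-check of the monthly power-cost figures above.
# Assumes 24/7 load; plug in your own wattage and electricity rate.

def monthly_cost(watts, usd_per_kwh, hours=24 * 30):
    """Electricity cost for running a constant load for one month."""
    return watts / 1000 * hours * usd_per_kwh

print(f"700 W node at $0.06/kWh: ${monthly_cost(700, 0.06):.2f}/month")
print(f"Four such nodes:         ${monthly_cost(4 * 700, 0.06):.2f}/month")

# Working backwards, $75/month at $0.06/kWh implies a sustained draw of:
implied_watts = 75 / 0.06 / (24 * 30) * 1000
print(f"Implied draw for $75/month: {implied_watts:.0f} W (overclocked, per the post)")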

I am very, very good at tuning G34 boxes, and I'll help, but this is not Reddit homelab; most here are experts in related fields and should be treated as such.
 

Storm-Chaser

Twin Turbo
Fully understood - thank you. And I want to be clear: that's what this project is all about: learning, as well as the sentiment and partiality I have toward the Phenom II CPU. I'm not out looking to break any records; however, I do intend to fully optimize the hardware of this cluster for maximum throughput. I used to be an IT support tech but dealt mainly with desktops and printers, so this is all very interesting stuff to me - and something I've wanted to do for quite a while.

Things are starting to come together. The Dell C4130 GPU server is here. I held off on purchasing the GPUs because I wanted to take some measurements of the C4130 chassis before pulling the trigger on suitable GPUs. If anyone has advice on designing and implementing a cluster with the C4130, I'd love to get some feedback before I buy the GPUs.

*There is also a possibility that I may only be able to run two GPUs, given the fact that I am using 110V instead of 220V. I need to investigate the C4130's power delivery system further to get my ducks in a row.
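Here is the rough circuit-side budget I am working from while I investigate. The breaker size, 80% continuous-load derating, PSU efficiency, and non-GPU overhead are my own assumptions, not Dell specs - adjust to the actual circuit and power supplies.

Code:
# Rough 110V circuit budget for the C4130 with Tesla K80s.
# Breaker size, derating, PSU efficiency and non-GPU overhead are assumptions,
# not Dell specifications.

circuit_volts = 110
breaker_amps = 15
continuous_factor = 0.8        # NEC-style 80% rule for continuous loads
psu_efficiency = 0.92          # assumed high-efficiency PSU

available_watts = circuit_volts * breaker_amps * continuous_factor  # at the wall
usable_dc_watts = available_watts * psu_efficiency

k80_tdp = 300
cpu_and_rest = 400             # assumed: two E5 v3 CPUs, RAM, fans, drives

for gpus in (2, 3, 4):
    load = gpus * k80_tdp + cpu_and_rest
    verdict = "fits" if load <= usable_dc_watts else "over budget"
    print(f"{gpus} x K80: ~{load} W DC vs {usable_dc_watts:.0f} W usable -> {verdict}")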

I already performed a CPU upgrade on the GPU server. From the factory, it had

2 E5 2620 V3 Xeon chips, six cores, 2.4GHz base, 3.2GHz boost.


I chose two E5 2678 V3 12-core Xeons as my upgrade path: 2.5GHz stock and 3.3GHz boost.

This CPU is nearly identical to the E5 2680 v3, except that it supports both DDR3 and DDR4 memory. I paid $100 apiece for the upgraded CPUs. Ideal value, or so it seems, in the E5 v3 family.

It has 16GB of RAM (DDR4-2133), and while I did get some 1.8" SSD adapters, I'm not sure that's going to work in terms of main storage.













More pictures to follow.
 

Storm-Chaser

Twin Turbo
Have fun with it, play with it, view it as a learning experience on how to manage servers and clusters... but don't ask for performance advice from experts and scoff at them. We will help you tune this, but after your first month's power bill you may find you want to trade up a bit. I think it costs me $75/mo for one node at 6c/kWh... and you have four nodes.

I am very, very good at tuning G34 boxes, and I'll help, but this is not Reddit homelab; most here are experts in related fields and should be treated as such.
I look forward to working with you on this! Still waiting on both C6145s. I'm hoping they show up by Friday.

The reason I took the hardline approach is that I made it quite clear in the first post that the hardware had already been selected and purchased, and that I just needed some guidance from the experts here on making it all work together most effectively.

I knew the shortcomings of the 6180 SE right out of the gate. What was frustrating to me is that this guy continued to question the hardware and kept pushing the notion that it certainly was not high-performance equipment, and he insisted on these points even after knowing I was doing it for sentimental reasons.

I also find it a little disingenuous to deem this equipment as lacking performance, or as not living up to the industry standard of high performance.

For one, the twelve-core 6180 SE was AMD's most powerful processor at the time. If you want to relegate this hardware to the bottom shelf, you need to give me a clear reason why I should take you seriously when you are making the ludicrous claim that AMD's flagship, best-performing, most powerful CPU fails to meet the industry standard of "high performance." Because at the time, it was smoking the Intel chips and setting benchmark records left and right. So if the 6180 SE should not be considered high performance, then no other server of that era could be considered high performance either.

So I hope you can understand why I didn't just accept his blanket statement outright as truth. Not to mention the fact that all of these C-series PowerEdge servers are aimed squarely at HPC environments, where scalability, compute power, and density are needed most. Yet whenever I brought this up, he wrote it off as "Dell propaganda."

Which is odd, because when a manufacturer and vendor like Dell promotes a certain server for a certain environment, they typically aren't trying to pull the wool over everyone's eyes; they genuinely intend for that server to be used in that specific environment. Purpose-built. Dell is not in the business of lying to the public or its clients about the use case of its equipment just to earn a few extra dollars, only to lose massively down the road once customers find out the server can't meet their needs.

The reality of the situation is that the C6145 was the culmination of Dell listening to its customers, acting on client feedback, and pushing the performance envelope to get more compute power into a smaller form factor. You don't hear a lot about power-to-weight ratio in technology, but in the computer industry you could say the Dell C6145 stood alone in terms of horsepower per cubic inch. The server was a beast and should be described accurately; that's why I took offense at his comments.

But I want to be clear: I am here to learn from you guys, and I am very excited about this project. I will be sure to keep you informed and up to date on my progress.
 

Storm-Chaser

Twin Turbo
Where are my servers? Arrgg. Very slow coming from the Seattle area, which is not surprising.

I guess the waiting is not so bad when you consider the circumstances. I can start to frame out a rough idea for my cluster in conjunction with the C4130 and the second HP ProLiant DL360p (with a third on the way). This is the server with 64GB of RAM, and I can harvest some of the memory for this other build if necessary. The processors in this server have also been upgraded to something much more powerful. I can't remember exactly what I put in there at the moment, but it's definitely some badass Xeon silicon near the top of the list.

I pulled the trigger on four GPUs, so my mind is made up. Ultimately, it was the Tesla K80 dual-GPU card that won me over - for a number of reasons, but mainly because it has two graphics processors, so it fits in very nicely with the theme of this build.

The Tesla K80 was a professional graphics card by NVIDIA, launched in November 2014. Built on the 28 nm process, and based on the GK210 graphics processor, in its GK210-885-A1 variant, the card supports DirectX 12. The GK210 graphics processor is a large chip with a die area of 561 mm² and 7,100 million transistors. Tesla K80 combines two graphics processors to increase performance. It features 2496 shading units, 208 texture mapping units, and 48 ROPs, per GPU. NVIDIA has paired 24 GB GDDR5 memory with the Tesla K80, which are connected using a 384-bit memory interface per GPU (each GPU manages 12,288 MB). The GPU is operating at a frequency of 562 MHz, which can be boosted up to 824 MHz, memory is running at 1253 MHz.
Being a dual-slot card, the NVIDIA Tesla K80 draws power from 1x 8-pin power connector, with power draw rated at 300 W maximum. This device has no display connectivity, as it is not designed to have monitors connected to it. Tesla K80 is connected to the rest of the system using a PCI-Express 3.0 x16 interface. The card measures 267 mm in length, and features a dual-slot cooling solution.
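Working from the clocks quoted above, the per-GPU memory bandwidth can be backed out with a quick calculation; the only assumption added here is GDDR5's 4x data rate per memory clock.

Code:
# Memory bandwidth per GK210, derived from the clocks quoted above.
# GDDR5 transfers data at 4x the memory command clock, hence the factor of 4.

mem_clock_mhz = 1253
bus_width_bits = 384
effective_rate_mts = mem_clock_mhz * 4                          # ~5012 MT/s
bandwidth_gbs = effective_rate_mts * 1e6 * bus_width_bits / 8 / 1e9

print(f"Per GPU: {bandwidth_gbs:.1f} GB/s")                     # ~240.6 GB/s
print(f"Per K80 board (2 GPUs): {2 * bandwidth_gbs:.1f} GB/s")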
 

Storm-Chaser

Twin Turbo
The first C6145 arrived today. The ping pong table makes a good staging area for this project. On the far side of the table you will see the parts servers I will be using, as needed, to fit out and complete this project. I have an HP ProLiant DL360 Gen6 over there, as well as a Dell Precision T7500 workstation, to harvest RAM from as needed. The T7500 is interesting to me because it contains twelve 2GB modules of dual-rank 1333MHz ECC registered DDR3 memory, the ideal spec to populate all 128 memory sockets across the four C6145 nodes. However, according to Dell documentation, the C6145 only supports 4GB modules and up. So I will be testing that later today, hoping it works, because the 2GB modules are very cheap and it wouldn't break the bank to populate all the memory sockets. This would allow for sixteen-channel memory (per node). Here are some other memory modules I have around to also test with the new C6145 servers.
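To make the DIMM shopping concrete, the capacity math works out as sketched below; the only assumption beyond what's stated above is that every one of the 32 slots per node gets the same size module.

Code:
# Capacity per C6145 node for different DIMM sizes.
# Assumes all 32 slots in a node (4 sockets x 8 slots) are populated with
# identical modules, which is also what keeps all 16 channels active.
# Note: per Dell documentation the C6145 officially supports 4GB modules and up;
# the 2GB row is just the "what if it works" case being tested here.

slots_per_node = 32
nodes = 4   # two C6145 chassis, two nodes each

for gb_per_dimm in (2, 4, 8):
    per_node = slots_per_node * gb_per_dimm
    total = per_node * nodes
    print(f"{gb_per_dimm}GB modules: {per_node}GB per node, {total}GB across {nodes} nodes")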

Yes, I went a little overkill with the pictures, testing out my new camera...






After harvesting all SAS drives from old servers, I've ended up with a grand total of ten 300GB 10000RPM HDDs.
I will use two of these mechanical HDDs per C6145 node, likely in a RAID 0 configuration for a total of 600GB per node (also note, the OS boot drive on all of these servers is going to be an SSD, either a single drive or two in RAID 0); a quick capacity sketch follows the list below. Please note this cluster will have six servers total when complete, as listed:
*Two Dell PowerEdge C6145s (since each server has two nodes, this topology works out to a total of four independent quad-CPU servers/nodes)
*One Dell PowerEdge C4130 GPU server with four Nvidia Tesla K80 GPUs. Since each K80 houses two independent GPU processors on the same board, each C6145 node will get a full 24GB of GDDR5 GPU memory and two Tesla GPU cores. Effectively, that's two Tesla graphics accelerators per node.
*One HP ProLiant DL360p 1U server (64GB RAM, SSD boot drive, and upgraded CPUs)

A total of 10 SAS drives for dedicated storage (roughly two per server)
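Here is the quick capacity sketch mentioned above for the per-node RAID 0 scratch arrays. It just tallies capacities, assuming the C6145 nodes each get two of the 300GB drives per the plan; keep in mind one dead drive takes out the whole stripe.

Code:
# Quick tally of the RAID 0 scratch-storage plan for the C6145 nodes.
# RAID 0 capacity is simply the sum of the member drives; redundancy is zero,
# which is acceptable here since no valuable data will live on these arrays.

drives_total = 10
drive_gb = 300
drives_per_node = 2
nodes = 4

per_node_gb = drives_per_node * drive_gb        # 600 GB per node
used = drives_per_node * nodes                  # 8 of the 10 drives
spares = drives_total - used

print(f"Per node: {per_node_gb} GB in RAID 0")
print(f"Drives used: {used}, spares left over: {spares}")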



I also have four 15000 RPM 146GB SAS drives


You can see the two C6145 nodes here


Front of the C6145





1.8" usata SSD bays in the Dell C4130 GPU server


Other memory to test if the 2GB modules don't work:


HP ProLiant DL360p to be used in this cluster


Parts servers for this project:



Parts servers on the near side of the ping pong table


HP microserver Gen10


Parts inventory
 

Storm-Chaser

Twin Turbo
Servers on the near side of the ping pong table will form the cluster; on the far side are the old servers for parts...


GPU server (fits up to 4 300W TDP Nvidia Tesla GPUs)


HP MicroServer Gen10 (provisions for up to four HDDs for network storage; upgraded to 16GB RAM)


3TB Enterprise level hard drives (for NAS)


Additional 1333MHz memory to test



GPU server


Other memory to test out with the C6145


All of these parts will be installed across the six servers in this cluster


HP ProLiant on the right, GPU server in center and C6145 on the far left


ten of these 10K rpm 300GB HDDs


CPU upgrade:


 

Storm-Chaser

Twin Turbo
So one of the new Dell C6145 servers has the wrong processors in it and doesn't POST; it was apparently damaged in shipping. The other node appears to be fine.

The seller and I agreed that, instead of paying the huge shipping cost to send it back to him, he will simply refund me the money for one, so that works out to $145 for both C6145s.

The second one actually has the later Bulldozer-based Opteron 6276 16-core CPUs under the hood, but I specifically wanted the 140W 6180 SE.

This means I need to purchase one other server to complete the cluster - preferably something newer with DDR4 and dual CPUs. I need something on the same level as my Dell C4130 to take full advantage of the GPGPU crunching power.
 

NachoCDN

Active Member
But I want to be clear: I am here to learn from you guys, and I am very excited about this project. I will be sure to keep you informed and up to date on my progress.
i got into this field because of my love for learning. seeing those images you posted definitely demonstrates your love for learning as well! at the end of the day, i love finding out what's possible and seeing the number of cores in task manager go up!
 

edge

Active Member
I succeeded in this field because I could make a good estimate of where it would be five years in the future, not ten years in the past. I don't understand what there is of current monetary value to be learned in Storm-Chaser's endeavor. That said, I do understand the nostalgia of memories on the bleeding edge and the satisfaction of recreating it.
 

Storm-Chaser

Twin Turbo
I succeeded in this field because I could make a good estimate of where it would be five years in the future, not ten years in the past. I don't understand what there is of current monetary value to be learned in Storm-Chaser's endeavor. That said, I do understand the nostalgia of memories on the bleeding edge and the satisfaction of recreating it.
I'm a hardware enthusiast with a collection of rare, unique, or significant computers, and this server/cluster fits in as part of that passion. The build has absolutely nothing to do with practical computing. So while I did get a pretty good deal on the servers, money is no object; I just wanted to point out that $$$ should not be a determining factor in critiquing this build.

I take it you are not familiar with the term "retrocomputing"? A brief summary is below.

This is a hobby build, and with that in mind I will improve the servers, spec them, and fit them to my liking. Since there is no risk of losing valuable data, I should be able to run all my OS SSDs in RAID 0.

Retrocomputing (wiki) is the use of older computer hardware and software in modern times. Retrocomputing is usually classed as a hobby and recreation rather than a practical application of technology; enthusiasts often collect rare and valuable hardware and software for sentimental reasons. However, some do make use of it.
 

edge

Active Member
Did you miss my last sentence? I did note the satisfaction of recreation.
 

Storm-Chaser

Twin Turbo
So, we are getting down to brass tacks.
Servers to be used in this cluster:

1) Dell C6145 with 48 Opteron 6180 SE cores, populating all RAM sockets with 1333MHz ECC to take advantage of all sixteen memory channels.
2) Dell C4130 GPU server with four Nvidia Tesla K80 GPUs
3) HP ProLiant DL360p gen8 server
4) The last server will be a Dell Precision R7910 with lightning-quick SSD drives.
 

jSON_BB

New Member
If anyone here can direct me to a way to reset the remote login credentials, it would be appreciated. Any possibility they would be the same as on the unit you received?