Dell PowerEdge C6145 x2 + Poweredge C4130? Good combo?


Storm-Chaser

Twin Turbo
Apr 16, 2020
151
25
28
Upstate NY
So I just pulled the trigger on two Dell PowerEdge C6145 servers. These were so cheap I could not pass them up: it worked out to about $600 with shipping for both of them. Besides, they have my favorite CPU under the hood, namely the AMD Opteron 6180 SE. The reason I say favorite is because I stuck with my consumer-grade Phenom II X6 until about a year ago and it was hands down my favorite CPU over the years. So almost a decade on that platform and it never missed a beat. Matter of fact, the processors still have serious crunching power, and the only thing really lacking is modern instruction sets. So when I saw these servers for $145 each, each outfitted with 8 of the fastest K10 processors ever made, I had to buy them. You might be surprised to learn that the hexacore 1100T Phenom II still goes for well over $200 on eBay; it's rare to find a 10-year-old processor that holds its value like that. So the value is there. And God knows I would never favor the Opteron 6200 series, even if I could get more cores in the same space with that family. I think we all know why. Nobody in their right mind would ever put an FX chip into a server if they cared one iota about performance and IPC. The FX is a horrifically underperforming CPU that is only good if you want to see a novelty 5.0GHz in the lower right hand corner of your desktop. lol

In any event, I have been deep in thought as to how I am going to put these servers to good use, and that is why I am here. I would like to get feedback on how to best set this up and what you guys think is the best topology given my existing network and hardware. At the moment I don't have any specific form of research ready to deploy to these servers. They came out of the blue, so I've been scrambling to build out what I need to make this a successful project. Also note, I would like to harness the GPU compute power to the best of my ability. More on that below.

Part of that success is going to be maximizing GPU computational power (GPGPU-type stuff) across the board... Initially, I had my heart set on the supplemental PowerEdge C410x (an external 3U PCIe enclosure for up to 16 GPUs) to pair with my new C6145s, but there are bandwidth limitations and performance bottlenecks there, so it's not ideal. Not to mention performance per watt is lacking, and I would need a 220V electrical system in my house to run the damn thing... Plus, I searched high and low and couldn't find a single one for sale anywhere on the internet.

So my research has brought me to what I think might be some potentially interesting middle ground, and that goes by the name of the Dell PowerEdge C4130. This is different from Dell's previous C410x GPGPU solution in a number of ways. First, it is far more efficient than the C410x, with better performance per watt and much better overall computational power, because the GPUs are placed internally without the need for an external PCIe iPass cable. This is also a dual-socket DDR4 platform, and a single 1U C4130 server can house up to 4 GPUs.

I was thinking for now I would start with just one C4130 (can be had for about $850 on Amazon) and purchase 4 Nvidia GPUs (about $100 each) for use in the enclosure, to get some perspective on where I want to go with this project and how much I want to lean on GPGPU computational power relative to my sixteen 12-core Opteron 6180 SEs... test the waters, so to speak.

So, guys, I have a budget of about $3400 remaining for this build if we take off the initial $600, plus $80 for four new 1100W 110V PSUs to work with my house's electrical system. That means I would like to get some serious performance out of all this new hardware. I guess the crux of the problem is that you probably can't help me if I don't know myself exactly what I am going to be doing with the hardware infrastructure I'll be putting into place.

That being said, I am planning to do some deep learning, folding at home or other similar type workloads. I really do want to push this hardware to the absolute limit to maximize overall performance. I don't want it to just sit there and look pretty. So I am all ears in this regard.

To give you a little background, I have been amassing servers at home for quite some time. Perhaps you guys can help me with my network architecture so we have a clean-running, efficient, and high-performance operation here? Please, let me know what you think.

Existing network infrastructure:
1Gb Ethernet in the house. Decent internet connection, about 10Mbps down and 5Mbps up.

I have one HP DL360p Gen8 1U server that I was thinking could be my domain controller for my little network here. This server has 8 10K SAS drives in a RAID 0 configuration for maximum performance (I know, I know, RAID 0, but don't worry, my data is safe). This server has two six-core E5 processors at 2.5GHz and 64GB of 1333MHz ECC registered DIMMs (16 memory modules in total). 2TB total disk space.

Second server is a HP ProLiant ML360 Gen 6 with slightly more dated hardware. Also a dual socket server, with 16GB of DDR3 memory and 4 15K RPM SAS drives. This thing is dated but I'm sure I can still put it to good use in some capacity or another.

I have two HP Z820s, both dual-socket LGA 2011 workstations. One has 24 physical cores / 48 threads at 3.5GHz turbo, combined with 64GB of octal-channel DDR3-1866 memory, plus 4 SSDs in RAID 0 for maximum throughput. The other Z820 has 24GB of RAM and an SSD, and I am currently waiting on two 4.0GHz E5-2600 v2 processors from China to finish that build.

I also have a Dell T7500 running two Xeon X5690 3.46GHz (3.73GHz turbo) CPUs and another SSD for the boot drive.

Lastly, I have an HP MicroServer Gen10 enclosure (currently waiting on four 3TB HDDs). I have this outfitted with an SSD and 16GB of DDR4 G.Skill non-ECC memory that runs at 2133MHz. This will be my file server of sorts. So we have some potential here, I just need your help to put it all together in the most effective way possible.

I guess what I am saying is that my network is a clean slate. I have just presented some of the hardware that we can potentially onboard to turn this into an interesting and productive challenge. Like I said earlier, my main and only goals here are to maximize and optimize my servers so they are effectively working together to deliver the highest performance possible. So I am ready and willing to accept advice here... on the network, on hardware, on servers, on GPGPU stuff and all related subjects.

And just to fill you in, my new C6145 servers DO NOT come with RAM or HDDs, so I am going to have to build these up. It sometimes seems easier just to buy cheap HP ProLiant DL360 Gen8 servers and harvest the RAM and HDDs that way, because I can get a whole server for about $300 with 64GB of 1333MHz ECC DDR3, versus a 128GB RAM-only kit which will run me around $400... Two birds, one stone, almost.

In a perfect world, I would want to run 1600MHz DDR3 in 4GB modules to capitalize on all the memory bandwidth my new systems have to offer. So that would be 32 slots per server for a total of 64 slots that need to be populated if I want to build this out to the best of my ability.
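For reference, the theoretical numbers behind that plan can be sketched quickly. A minimal sketch, assuming (not stated above) that DDR3-1600 moves 12.8 GB/s per 64-bit channel and that each Opteron 6100 socket runs four memory channels:

```python
def channel_bw_gbps(mt_per_s, bus_bytes=8):
    """Theoretical peak of one memory channel in GB/s (MT/s x 8-byte bus)."""
    return mt_per_s * bus_bytes / 1000

def node_bw_gbps(sockets, channels_per_socket, mt_per_s):
    """Aggregate peak bandwidth for one multi-socket node."""
    return sockets * channels_per_socket * channel_bw_gbps(mt_per_s)

per_node = node_bw_gbps(sockets=4, channels_per_socket=4, mt_per_s=1600)
print(f"Per 4-socket C6145 node: {per_node:.1f} GB/s")       # 204.8 GB/s
print(f"All 4 nodes (2 chassis): {4 * per_node:.1f} GB/s")   # 819.2 GB/s
```

Real-world STREAM numbers would come in well under these peaks, but it shows why populating every channel matters.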

Given my budget and hardware constraints, I am all ears for any and all recommendations on not only the network, but the server topology as well. I want to build a high performance cluster here without compromise. It's nice to work with this old equipment because it's all so cheap, and I have a little breathing room here with another $3400 left over to complete the project.
 

BlueFox

Legendary Member Spam Hunter Extraordinaire
Oct 26, 2015
2,091
1,507
113
I don't think anyone here is going to recommend a 10 year old 45nm CPU with a 140W TDP that is outperformed by a Core i3 from 3 generations back. A modern single CPU system can outperform all 8 CPUs in that chassis and not draw 1kW+. If you go a few generations back, you can still get CPUs that will best each node, so a dual socket system will still come out ahead of all 8 Opterons. The upfront cost is fairly similar and you won't be spending $100 a month on electricity like you would the C6145 space heater.

I would recommend that you return the Dells if you can and go with newer hardware. Even if you write them off as a loss, the electricity savings from not running them will pay for a replacement fairly quickly.
 

Storm-Chaser
I don't think anyone here is going to recommend a 10 year old 45nm CPU with a 140W TDP that is outperformed by a Core i3 from 3 generations back. A modern single CPU system can outperform all 8 CPUs in that chassis and not draw 1kW+. If you go a few generations back, you can still get CPUs that will best each node, so a dual socket system will still come out ahead of all 8 Opterons. The upfront cost is fairly similar and you won't be spending $100 a month on electricity like you would the C6145 space heater.

I would recommend that you return the Dells if you can and go with newer hardware. Even if you write them off as a loss, the electricity savings from not running them will pay for a replacement fairly quickly.
Look, I understand the performance characteristics of the Opteron 6100 series platform just as well as you do. You don't like the hardware. It's old, I get it. Might have something to do with sentimental value. Believe it or not, people actually build rigs for reasons other than sheer performance metrics relative to current or newer tech in the marketplace. And for the record, it will be a total of 16 processors and 192 physical cores. Regardless of how you cut it, or your negative perceptions, a power potential of 20 teraflops is no joke. Makes the 9900K look like the runt of the litter, doesn't it?

Kind of like how people work on old cars? Who in the world would want a 57 Chevy with no air conditioning? And that lethargic V8 under the hood is horrible for power and fuel economy. Who would want to rebuild and restore a turd like that?

Get my drift? Or would you like me to illustrate the point with another parable?
 

BlueFox
The 9900K will outperform each node still (with a fraction of the power consumption), so not really. While you may not have liked the performance metrics that I presented, I would not call my perception towards the hardware negative. Said metrics are not my opinion after all.

I don't think anyone here can offer you any suggestions if performance, value, and functionality are not the primary concerns. You'll need to figure out what your priorities are on your own and choose hardware accordingly.

Your car analogy is also a bit unfair. A system of that generation/era is hardly considered a classic. Sentiment would be a little different if you were trying to build the highest-end Pentium 1 system for retro gaming, for example. I think the Dell would be more analogous to, say, an early 90s Chevrolet Caprice with its 180hp 5.7L V8, as indeed, who would want to rebuild and restore that?
 

Storm-Chaser
The 9900K will outperform each node still (with a fraction of the power consumption), so not really. While you may not have liked the performance metrics that I presented, I would not call my perception towards the hardware negative. Said metrics are not my opinion after all.

I don't think anyone here can offer you any suggestions if performance, value, and functionality are not the primary concerns. You'll need to figure out what your priorities are on your own and choose hardware accordingly.

Your car analogy is also a bit unfair. A system of that generation/era is hardly considered a classic. Sentiment would be a little different if you were trying to build the highest-end Pentium 1 system for retro gaming, for example. I think the Dell would be more analogous to, say, an early 90s Chevrolet Caprice with its 180hp 5.7L V8, as indeed, who would want to rebuild and restore that?
So now you are dictating for me what has sentimental value and what doesn't? LOL that's rich.

Sure, I'd rebuild and restore an early/mid-90s Caprice, because it makes for one hell of a sleeper. The 9C1 designation is what you want to go after; the 9C1 "police spec" offers more HP, better brakes, upgraded suspension, and a locking rear diff, to name a few. And even if you are stuck with a regular cookie-cutter civilian-model Caprice, the classic LT1 swap from a Buick Roadmaster Limited or Impala SS is still a very popular upgrade. Add a big old thumpity cam and some headers and you've got a sleeper and a hot rod on your hands, plus a substantial amount of horsepower under your right foot. The LT1 was essentially the most powerful V8 engine in its class; that's why it's been used hundreds of times by Caprice owners looking for maximum performance. *Cough* 6180 SE was best in class *Cough*

So I take it the expression: "beauty is in the eye of the beholder" is lost on you?

I take it the significance of the Opteron 6180 SE CPU is lost on you as well?

And your feedback is fine, I'm just restating my positions as well, so you know where I stand on this. And apparently this person believes a single eight-core 9900K chip can outperform 192 K10 cores and 8 GPUs. Must be the hyperthreading, right? That's the 9900K's secret weapon, isn't it? LOL

I'm really trying to give you the benefit of the doubt here, but apparently you cannot see the forest for the trees and so it seems, your calculations are way off the mark.

FLOPS = (sockets) x (cores per socket) x (cycles per second) x (FLOPS per cycle)

The Intel Core i9-7980XE Extreme Edition processor, for example, has 18 cores running at about 4.3 GHz (faster if overclocked) and thus should calculate to roughly 1.3 teraflops.

My two PowerEdge C6145 servers plus 4-8 high-performance GPUs for GPGPU should get me in the ballpark of 20-25 teraflops. Not sure where the disconnect is here... but we are moving forward with the project regardless of what the naysayers think.
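For anyone following along, the formula above can be turned into a quick sanity check. The FLOPS-per-cycle figures are assumptions, not from the thread: 16 double-precision FLOPS/cycle for an AVX-512 core with one 8-wide FMA unit, and 4 DP FLOPS/cycle for a 128-bit-SSE K10 core:

```python
def peak_flops(sockets, cores_per_socket, hz, flops_per_cycle):
    """FLOPS = sockets x cores/socket x cycles/sec x FLOPS/cycle."""
    return sockets * cores_per_socket * hz * flops_per_cycle

# i9-7980XE example from the post: 1 socket, 18 cores, ~4.3 GHz
i9 = peak_flops(1, 18, 4.3e9, 16)
print(f"i9-7980XE: {i9 / 1e12:.2f} TFLOPS")     # 1.24 TFLOPS

# Two C6145 chassis: 16 sockets of 12-core 6180 SE at 2.5 GHz
opt = peak_flops(16, 12, 2.5e9, 4)
print(f"16x 6180 SE: {opt / 1e12:.2f} TFLOPS")  # 1.92 TFLOPS
```

By this back-of-the-envelope math the CPUs alone land around 2 TFLOPS, so most of a 20-25 TFLOPS target would have to come from the GPUs.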

I just want this to act as a sounding board, and a build log of sorts. I will be sure to keep you guys updated as to my progress. Servers should be here by the end of the week and we will jump right in at that point.
 

BlueFox
I was not dictating anything for you. I specifically said that you need to figure out your priorities as we cannot do that for you.

There are plenty of benchmarks out there that show performance. You are free to look them up. I stated that the 9900K will outperform a node and used Passmark scores as a reference due to how commonly it is used. As you keep trying to put words in my mouth, you should understand what a node is and that each chassis has two. I've also not mentioned GPUs at all.

If you really want to use FLOPS as a comparison, then you should be aware that thanks to AVX-512, modern CPUs are going to do 8x as many operations per cycle as your ancient Opterons and only widen the performance gap even further.

A 9900K will be similar to a node, Ryzen 3950X a chassis, and the top end Threadripper will be the equivalent of both chassis. All of which are single CPUs.

If you came here for confirmation from a bunch of people and are not open to input, while I can't speak for the rest, I don't think you're going to get what you're after.
 

Storm-Chaser
I was not dictating anything for you. I specifically said that you need to figure out your priorities as we cannot do that for you.

There are plenty of benchmarks out there that show performance. You are free to look them up. I stated that the 9900K will outperform a node and used Passmark scores as a reference due to how commonly it is used. As you keep trying to put words in my mouth, you should understand what a node is and that each chassis has two. I've also not mentioned GPUs at all.

If you really want to use FLOPS as a comparison, then you should be aware that thanks to AVX-512, modern CPUs are going to do 8x as many operations per cycle as your ancient Opterons and only widen the performance gap even further.

A 9900K will be similar to a node, Ryzen 3950X a chassis, and the top end Threadripper will be the equivalent of both chassis. All of which are single CPUs.

If you came here for confirmation from a bunch of people and are not open to input, while I can't speak for the rest, I don't think you're going to get what you're after.
LOL you told me to return the servers, and now you claim I'm the one not open to input? Thanks, that was super helpful. It was pretty obvious from the get-go that I wanted advice on integrating these systems into my home network with an emphasis on performance, because, spoiler alert, they've already been purchased. So your so-called "input" was a moot point from the start.

Your answer on the 9900K is still a little wishy-washy. Did you do the math? I doubt it. And that's a good call on your part! LOL Because it obviously doesn't come out in favor of your supposition.

And enough with the hardware comparisons! I will ask a second time, do you not understand the expression

Beauty is in the eye of the beholder.

Now, I would ask you to refrain from posting here if you have nothing positive or constructive to add. All you've done is muddy the waters and bench race CPUs with bad and/or missing data.
 

Storm-Chaser
And just FYI this build definitely has a high performance component, and that frame of reference apparently goes right over your head.

Satisfy the need for speed: The pure volume of calculations calls for performance, all the way from the processor to the I/O bandwidth. The PowerEdge C6145 is one of the highest performing 2U rack servers ever, with two 4-socket AMD Opteron® 6200 series processors. These processors have up to 84 percent higher performance with up to 73 percent more memory bandwidth.[1] It's not just the FLOPS in the PowerEdge C6145 server that make the difference. It can also accommodate up to 1TB of memory.
 

BlueFox
So, that's a 10 year old marketing brief from Dell. Do you know how quickly computer components go obsolete? Even when those CPUs were released, they were still slower than Intel's offering at the time.

I told you that your best option is to return those servers as you are going to be spending ~$200 a month powering the pair at load. Even if you threw them in the dumpster and took the $600 loss, you would quickly make that back in electricity savings.

If you want performance, they are not it. The numbers speak for themselves:

Opteron (double since it's half the sockets and multiply by 1.1 for the extra clock speed, or ~18600) - PassMark - [Dual CPU] AMD Opteron 6176 SE - Price performance comparison
9900K - PassMark - Intel Core i9-9900K @ 3.60GHz - Price performance comparison
Ryzen 3950X - PassMark - AMD Ryzen 9 3950X - Price performance comparison

Anyway, best of luck with your 4U of space heaters. I doubt you will get better advice from anyone else here.
 
Apr 9, 2020
57
10
8
@Storm-Chaser I totally get where you're coming from with the sentimentality angle. I didn't even know you could find 8-socket systems that easily; now you've got me shopping for C6145s on eBay! For some of us, the novelty of being able to point at a machine and say "that has 8 physical CPUs in it" is totally worth the space/time/cost/electricity.

Also, I do have to disagree with BlueFox: 192 physical cores will definitely outperform in some areas. That's why they build supercomputers around core count, not individual clock speeds. It may not look superior on paper, but for the right application it would definitely still shine.
 

Storm-Chaser
Update:
Just ordered the two 1200W power supplies that run on 110V, needed for my house, so I should have no problem with power delivery once these arrive.

In terms of electrical load and power management, we will have each C6145 server plugged into a separate 20 Amp circuit dedicated to that room for this specific purpose (I just didn't know when). So we are almost ready to boot them up right here on the bottom floor, ping pong table covered with computer parts.
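As a rough sanity check on that plan: common US practice is to keep continuous loads to about 80% of a breaker's rating. A minimal sketch, with the PSU wattage taken from the post above and efficiency losses ignored:

```python
def usable_watts(breaker_amps, volts, derate=0.8):
    """Continuous power budget for one circuit (80% rule assumed)."""
    return breaker_amps * volts * derate

budget = usable_watts(20, 110)   # 1760 W continuous per 20A / 110V circuit
psu = 1200                       # one 1200W PSU per C6145, per the post
print(f"{budget:.0f} W budget vs {psu} W PSU:",
      "OK" if psu <= budget else "overloaded")
```

One chassis per dedicated circuit fits within that budget, with a little headroom left for fans spinning up at boot.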
 

BlueFox
@Storm-Chaser I totally get where you're coming from with the sentimentality angle. I didn't even know you could find 8-socket systems that easily; now you've got me shopping for C6145s on eBay! For some of us, the novelty of being able to point at a machine and say "that has 8 physical CPUs in it" is totally worth the space/time/cost/electricity.

Also, I do have to disagree with BlueFox: 192 physical cores will definitely outperform in some areas. That's why they build supercomputers around core count, not individual clock speeds. It may not look superior on paper, but for the right application it would definitely still shine.
Couple points. They're not 8 socket systems. It's two 4 socket systems in one chassis. Having 8 CPUs in a 2U chassis is cheap and common. I've owned a number of 1366, 2011, 2011-3, and 3647 Supermicro ones and they're everywhere on eBay.

192 cores will not outperform the things I mentioned, be it a single Threadripper 3990X, pair of Ryzen 3950X, or four 9900Ks. They are dead slow by today's standards. In single-threaded performance, a modern Atom CPU is faster. There are actually Atom CPUs that are faster even in multi-threaded performance! Having all those extra cores does not compensate.

Supercomputers are also not built with slow, high core count CPUs. They use fast, high core count CPUs (for example the top x86 based one in the US uses Xeon Platinum 8280s).
 
Apr 9, 2020
I've owned a number of 1366, 2011, 2011-3, and 3647 Supermicro ones and they're everywhere on eBay.
What models do you search for to find something like that? I am a bit of an Intel snob, I won't deny it.

Supercomputers are also not built with slow, high core count CPUs. They use fast, high core count CPUs (for example the top x86 based one in the US uses Xeon Platinum 8280s).
The point you apparently missed is that, for specific applications, more cores are better than faster cores, period. I used to be part of an SMP-enthusiasts community (before it died, RIP). One of the guys on there had an ancient Dell PowerEdge 8450, an 8-way P3 Xeon beast: eight 500MHz processors, 100MHz RAM. That 20-year-old machine still outperformed modern systems on certain benchmarks.
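One way to see the cores-vs-clocks point is a toy model of embarrassingly parallel work, where wall time is just work units divided by cores. All of the numbers here are illustrative assumptions, not benchmarks:

```python
import math

def wall_time(units, cores, secs_per_unit):
    """Time for `cores` workers to chew through independent work units."""
    return math.ceil(units / cores) * secs_per_unit

UNITS = 960  # independent work units, folding-at-home style
slow_many = wall_time(UNITS, cores=192, secs_per_unit=100)  # many slow cores
fast_few = wall_time(UNITS, cores=16, secs_per_unit=30)     # few fast cores
print(slow_many, fast_few)  # 500 vs 1800: the 192 slow cores finish first
```

The model ignores memory bandwidth and per-unit latency floors, which is exactly why it only holds for the "right application": workloads that scale cleanly across cores.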

In any case, @BlueFox, you're really not winning any popularity contests by ragging on this guy and his hobby. Benchmarks really don't mean anything when you're just trying to have some fun with equipment you happen to be passionate about. I personally happen to really love any system with more than 2 sockets just because it's cool. I wish I'd gotten my hands on an ALR 6x6 when they could be had fairly easily. It's OK for people to like things that aren't the most optimal, and to want to do stuff with them even if it's not the most efficient way to do it.


@stormchaser what OS do you like for many cores these days? I dunno what Windows 10 tops out at. Last time I really knew this stuff well was in the Windows 2000 era.
 

Storm-Chaser
Couple points. They're not 8 socket systems. It's two 4 socket systems in one chassis. Having 8 CPUs in a 2U chassis is cheap and common. I've owned a number of 1366, 2011, 2011-3, and 3647 Supermicro ones and they're everywhere on eBay.

192 cores will not outperform the things I mentioned, be it a single Threadripper 3990X, pair of Ryzen 3950X, or four 9900Ks. They are dead slow by today's standards. In single-threaded performance, a modern Atom CPU is faster. There are actually Atom CPUs that are faster even in multi-threaded performance! Having all those extra cores does not compensate.

Supercomputers are also not built with slow, high core count CPUs. They use fast, high core count CPUs (for example the top x86 based one in the US uses Xeon Platinum 8280s).
Are you finished yet?

Again, and as usual, you seem totally misguided and you are missing the mark. This build isn't about contrasting or measuring its capabilities against current or newer hardware. IT'S ABOUT HAVING FUN. I very much enjoy tinkering with equipment that cost well in excess of $5000 when new... It's about creating a core monster out of curiosity, learning as much as I can, and using some of the most high-performance servers of their day to pull this off... IE PERFORMANCE IS MEASURED RELATIVE TO SYSTEM (CPU) PLATFORM, NOT to current tech. There are lots of people out there who have a preconceived notion that current hardware should serve as the mandatory or primary benchmark to rate someone else's project or hardware (case in point LOL)... the gold standard, if you will. This is an ignorant position on so many levels. Believe it or not, there are some computer enthusiasts out there who hold this hardware in high regard because they don't measure its worth by current tech standards. So in a sense, you are the one with the restrictive lens for judging it based on current tech, and that will ultimately limit your creativity on so many levels, because you carry that lens around with you wherever you go.

Like I've said before there will be naysayers - of which you are the first, so you should feel honored.

You obviously have a very "ignorant" view of the C6145 and that's a shame because it's an exceptional piece of hardware.

I should also have you know that I am a computer hoarder/collector... I search the web high and low for interesting or exceptional hardware all the time. I collect computers and computer parts, and at the moment the C6145 is my most recent acquisition. Since you have current tech on the brain as the mandate, you've lost sight of what true computer enthusiasm is all about. And that's a shame.
 

BlueFox
You mentioned performance repeatedly:
That being said, I am planning to do some deep learning, folding at home or other similar type workloads. I really do want to push this hardware to the absolute limit to maximize overall performance. I don't want it to just sit there and look pretty. So I am all ears in this regard.

I guess what I am saying is that my network is a clean slate. I have just presented some of the hardware that we can potentially onboard to turn this into an interesting and productive challenge. Like I said earlier, my main and only goals here are to maximize and optimize my servers so they are effectively working together to deliver the highest performance possible. So I am ready and willing to accept advice here... on the network, on hardware, on servers, on GPGPU stuff and all related subjects.

I want to build a high performance cluster here without compromise.
So, I told you how to attain it. You were immediately unwilling to accept any advice and took it as a personal attack.

Like I stated in my second reply, if performance, cost, functionality, etc are not the goals, then no one can offer you any advice or suggestions as what you value will be entirely specific to you. You hold in high regard what many would consider to be unremarkable as it's not a particularly unique, rare, or niche item, so we cannot read your mind.
 
Apr 9, 2020
I've found a kindred spirit.

So @Storm-Chaser, let's ask this: what do you like to do? I know folding@home specifically has both CPU-intensive and GPU-intensive work units. I'm really new to the whole F@H scene, but that could be a fun way to play with and compare various performance factors. I would definitely say folding is more worthwhile than, say, seti@home, but mostly cuz I don't believe in aliens :p

Have any other weird things in your collection? I love my 32-port IP KVM switch lol
 

Storm-Chaser
You mentioned performance repeatedly:

So, I told you how to attain it. You were immediately unwilling to accept any advice and took it as a personal attack.

Like I stated in my second reply, if performance, cost, functionality, etc are not the goals, then no one can offer you any advice or suggestions as what you value will be entirely specific to you. You hold in high regard what many would consider to be unremarkable as it's not a particularly unique, rare, or niche item, so we cannot read your mind.
This is an honest question... I don't want to beat a dead horse, but what exactly makes you think I'm not interested in performance / throughput / bandwidth with this build?
 
Apr 9, 2020
This is an honest question... I don't want to beat a dead horse, but what exactly makes you think I'm not interested in performance / throughput / bandwidth with this build?
My guess is he thinks because your definition of "optimal" does not match his, that makes your goals not worth pursuing. Kind of an annoying/arrogant standpoint, but some people are like that.


Anyway, to hopefully steer the conversation back to the original question/goals: have you thought about doing something wild with the network? Maybe go to InfiniBand or Fibre Channel instead of boring old copper LAN? I've been contemplating doing the same thing myself. Might be a fun project.

Also, just spitballing in the "fun" category: what about turning the machines into a cluster? Then you could legitimately tell people you have a supercomputer. It's small and it's inefficient, but it counts!
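If it helps, the cluster idea could start with nothing more than an inventory of schedulable cores. The hostnames below are made up, and the C4130's core count is a guess (dual 8-core CPUs assumed); the rest of the counts come from this thread:

```python
# Planned cluster nodes -> physical CPU core counts.
NODES = {
    "c6145-a1": 48, "c6145-a2": 48,  # chassis 1: two nodes, 4x 12-core each
    "c6145-b1": 48, "c6145-b2": 48,  # chassis 2: same layout
    "c4130":    16,                  # dual-socket GPU host (2x 8-core assumed)
    "dl360p":   12,                  # dual 6-core E5 domain controller
}

total_cores = sum(NODES.values())
print(f"{len(NODES)} nodes, {total_cores} schedulable CPU cores")  # 6 nodes, 220 cores
```

A scheduler like Slurm, or an MPI launcher with a hostfile along these lines, would then treat the whole pile as one machine.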
 

Storm-Chaser
My guess is he thinks because your definition of "optimal" does not match his, that makes your goals not worth pursuing. Kind of an annoying/arrogant standpoint, but some people are like that.


Anyway, to hopefully steer the conversation back to the original question/goals: have you thought about doing something wild with the network? Maybe go to InfiniBand or Fibre Channel instead of boring old copper LAN? I've been contemplating doing the same thing myself. Might be a fun project.

Also, just spitballing in the "fun" category: what about turning the machines into a cluster? Then you could legitimately tell people you have a supercomputer. It's small and it's inefficient, but it counts!
No doubt. This guy totally missed the boat, and I think he's a lost cause at this point.

In any event, the show must go on. So I spent the last two hours on Amazon ordering more equipment for this project (we are getting closer!!!). This is what I just ordered:

- Two 1100W 110V PSUs for use in the home (both C6145s will come with 220V PSUs)
- Dell C4130 GPU server (dual proc)
- Five SATA III 512GB solid state drives. These will serve as OS drives ONLY on each server for blistering performance at every level.
- Including the recently purchased C4130, the server count involved in this project is up to 5.

We have:
*C4130 GPU host server (dual socket) (SSD powered OS)
*C6145 Server #1 (dual socket) (SSD powered OS)
*C6145 Server #2 (Dual socket) (SSD powered OS)
*HP ProLiant DL360p gen 8 (dual socket) (SSD powered OS)
*HP MicroServer Gen10 (SSD powered OS) (We now have 3TB drives to populate all four slots for storage. Also upgraded the unit to 16GB of DDR4 RAM earlier today.)

Not to mention the second HP z820 that I can throw into the mix if needed...

As you can see we are sparing no expense in maximizing the performance potential of these servers. But apparently even this probably doesn't count as "high performance" in the eyes of some. LOL

Next step is going to be tackling the issue of RAM (ideally I want to populate all 64 slots for max memory bandwidth in both C6145 servers). Do you think 1TB of RAM is going to be enough? :)

Phase after that will be acquiring 4 GPUs to use in conjunction with the C4130 GPU server.

So as you can see, we are coming at this with a high-performance emphasis from every angle. But some would say it's not enough; he probably thinks he's a server god or something. Wouldn't be the first time I've seen people take their own words as "gospel" LOL.
 

BlueFox
This is an honest question... I don't want to beat a dead horse, but what exactly makes you think I'm not interested in performance / throughput / bandwidth with this build?
You bought some 10 year old servers?

As you also only bought a single drive for each C6145, I don't think you realize that it's two nodes per chassis, and as such, you will need two.
My guess is he thinks because your definition of "optimal" does not match his, that makes your goals not worth pursuing. Kind of an annoying/arrogant standpoint, but some people are like that.
I have made no claims as to what is optimal. The OP made one goal very clear (in case you missed it), which was performance. It appears to me that they did not care for my advice as it did not provide them with the validation they were looking for.

Performance metrics are not my opinion. They are facts that cannot be argued. Every statement I have made in regards to them can be validated. I encourage you to refute any of them with evidence.