So I just pulled the trigger on two Dell PowerEdge C6145 servers. They were so cheap I couldn't pass them up: about $600 with shipping for the pair. Besides, they have my favorite CPU under the hood, the AMD Opteron 6180 SE. I say favorite because I stuck with my consumer-grade Phenom II X6 until about a year ago, and it was hands down my favorite CPU over the years; nearly a decade on that platform and it never missed a beat. In fact, those processors still have serious crunching power, and the only thing really lacking is modern instruction sets. So when I saw these servers at $145 each, each outfitted with eight of the fastest K10 processors ever made, I had to buy them. You might be surprised to learn that the six-core Phenom II X6 1100T still goes for well over $200 on eBay; it's rare for a 10-year-old processor to hold its value like that. So the value is there. And God knows I would never go for the Opteron 6200 series, even though I could get more cores in the same space with that family. I think we all know why: nobody in their right mind would put an FX chip in a server if they cared one iota about performance and IPC. The FX is a horrifically underperforming CPU that's only good if you want to see the novelty of 5.0GHz in the lower right-hand corner of your desktop. lol
In any event, I have been deep in thought about how to put these servers to good use, and that is why I am here. I would like feedback on how best to set this up and what you guys think is the best topology given my existing network and hardware. At the moment I don't have any specific research workload ready to deploy; these servers came out of the blue, so I've been scrambling to build out what I need to make this a successful project. I would also like to harness GPU compute power to the best of my ability; more on that below.
Part of that success is going to be maximizing GPU computational power (GPGPU-type stuff) across the board. Initially, I had my heart set on the supplemental PowerEdge C410x (an external 3U PCIe enclosure for up to 16 GPUs) to pair with my new C6145s, but there are bandwidth limitations and performance bottlenecks there, so it's not ideal. Performance per watt is also lacking, and I would need a 220V circuit in my house to run the damn thing... Plus, I searched high and low and couldn't find a single one for sale anywhere on the internet.
So my research has brought me to what I think might be an interesting middle ground: the Dell PowerEdge C4130. It differs from Dell's earlier C410x GPGPU solution in a number of ways. First, it is far more efficient than the C410x, with better performance per watt and much better overall computational power, because the GPUs sit internally with no external PCIe iPass cable required. It is also a dual-socket DDR4 platform, and a single 1U C4130 can house up to four GPUs.
I was thinking I would start with just one C4130 (can be had for about $850 on Amazon) and four NVIDIA GPUs (about $100 each) for the enclosure, to get some perspective on where I want to go with this project and how much I want to lean on GPGPU compute relative to my sixteen 12-core Opteron 6180 SEs... test the waters, so to speak.
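For the waters-testing itself, here's roughly what I have in mind: a minimal GPU-vs-CPU throughput sketch in Python. This assumes PyTorch with CUDA support is installed on the C4130 and whatever cards end up in it; the matrix size is just a placeholder.

```python
# Rough GPU-vs-CPU throughput check -- not a rigorous benchmark, just enough
# to see how cheap NVIDIA cards stack up against the Opteron cores.
# Assumes PyTorch with CUDA support is installed.
import time
import torch

N = 4096  # matrix size; bump it up if the numbers look noisy

def gemm_gflops(device: str) -> float:
    a = torch.randn(N, N, device=device)
    b = torch.randn(N, N, device=device)
    torch.matmul(a, b)  # warm-up so launch/alloc costs don't skew the timing
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    return 2 * N**3 / elapsed / 1e9  # an NxN GEMM costs ~2*N^3 FLOPs

print(f"CPU: {gemm_gflops('cpu'):.1f} GFLOPS")
if torch.cuda.is_available():
    print(f"GPU: {gemm_gflops('cuda'):.1f} GFLOPS")
```

If the $100 cards come out an order of magnitude ahead of the Opterons on something like that, I'll know how hard to lean on the GPGPU side.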
So, guys: I have about $3,400 left in the budget for this build, after taking off the initial $600 plus $80 for four new 1100W 110V PSUs to suit my house's electrical system. For that money I would like to get some real performance out of all this new hardware. I guess the crux of the problem is that you probably can't help me if I don't know exactly what I am going to be doing with the infrastructure I'm putting in place.
That being said, I am planning on deep learning, Folding@home, or other similar workloads. I really do want to push this hardware to the absolute limit; I don't want it to just sit there and look pretty. So I am all ears in this regard.
To give you a little background, I have been amassing servers at home for quite some time. Perhaps you guys can help me with my network architecture so we have a clean-running, efficient, high-performance operation here? Please let me know what you think.
Existing network infrastructure:
1Gb Ethernet in the house. Decent internet connection, about 10Mbps down and 5Mbps up.
I have one HP DL360p Gen8 1U server that I was thinking could be the domain controller for my little network. It has eight 10K SAS drives in a RAID 0 configuration for maximum performance (I know, I know, I said RAID 0, but don't worry, my data is safe), two six-core E5 processors at 2.5GHz, and 64GB of 1333MHz ECC registered DIMMs (16 modules in total). 2TB total disk space.
The second server is an HP ProLiant ML360 Gen 6 with more dated hardware. Also dual socket, with 16GB of DDR3 memory and four 15K RPM SAS drives. It's dated, but I'm sure I can still put it to good use in some capacity or another.
I have two HP Z820s, both dual-socket LGA 2011 workstations. One has 24 physical cores / 48 threads at 3.5GHz turbo, 64GB of DDR3-1866 spread across all eight memory channels, plus four SSDs in RAID 0 for maximum throughput. The other Z820 has 24GB of RAM and an SSD, and I am currently waiting on two 4.0GHz E5-2600 v2 processors from China to finish that build.
I also have a Dell T7500 running two Xeon X5690 CPUs at 3.46GHz (3.73GHz turbo) with an SSD boot drive.
Lastly, I have an HP MicroServer Gen10 (currently waiting on four 3TB HDDs). I have it outfitted with an SSD and 16GB of non-ECC G.Skill DDR4 running at 2133MHz. This will be my file server of sorts. So we have some potential here; I just need your help to put it all together in the most effective way possible.
I guess what I am saying is that my network is a clean slate. I have presented the hardware we can potentially onboard to turn this into an interesting and productive challenge. Like I said earlier, my main goal is to get my servers working together effectively to deliver the highest performance possible. So I am ready and willing to accept advice on the network, the hardware, the servers, GPGPU stuff, and all related subjects. As a first step, I figure I should verify the boxes can actually operate as one cluster, with something like the sketch below.
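Here's a minimal sanity check I have in mind, assuming Open MPI and mpi4py get installed on every node (the hostfile and rank count are placeholders until the topology is settled):

```python
# Cluster sanity check: every rank reports its hostname; rank 0 prints the roster.
# Assumes Open MPI + mpi4py on all nodes and passwordless SSH between them.
import socket
from mpi4py import MPI

comm = MPI.COMM_WORLD
hosts = comm.gather(socket.gethostname(), root=0)  # list on rank 0, None elsewhere
if comm.Get_rank() == 0:
    print(f"{comm.Get_size()} ranks across {len(set(hosts))} hosts: {sorted(set(hosts))}")
```

Run with something like `mpirun --hostfile hosts -np 96 python cluster_check.py`, adjusting the hostfile and rank count to whatever the final build dictates. If every machine shows up in the roster, the cluster plumbing works.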
And just to fill you in, my new C6145 servers DO NOT come with RAM or HDDs, so I am going to have to build those up. Sometimes it seems easier to buy cheap HP ProLiant DL360 Gen8 servers and harvest the RAM and HDDs: I can get a whole server with 64GB of 1333MHz ECC DDR3 for about $300, versus around $400 for a 128GB RAM-only kit. Two birds, one stone, almost.
In a perfect world, I would run 1600MHz DDR3 in 4GB modules to capitalize on all the memory bandwidth these systems have to offer. That would be 32 slots per server, 64 slots in total, to populate if I want to build this out to the best of my ability. Once the DIMMs are in, I'd want a quick way to confirm the bandwidth is actually there; see the sketch below.
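A crude STREAM-style triad, run on each node after the DIMMs go in, would tell me whether populating all the channels is paying off. Single-threaded NumPy will undercount what four sockets can do in aggregate, so treat the result as a floor, not a ceiling:

```python
# Crude STREAM-style "triad" memory-bandwidth check using plain NumPy.
# Arrays are sized well past the caches so we're really hitting DRAM.
import time
import numpy as np

N = 50_000_000            # ~400MB per float64 array
a = np.random.rand(N)
b = np.random.rand(N)

start = time.perf_counter()
c = a + 3.0 * b           # triad: two reads + one write per element
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8   # read a, read b, write c (8 bytes per float64)
print(f"Triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```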
Given my budget and hardware constraints, I am all ears for any and all recommendations, on the network and the server topology alike. I want to build a high-performance cluster here without compromise. It's nice working with this older equipment because it's all so cheap, and the remaining $3,400 gives me a little breathing room to complete the project.