So I have an old Cisco UCS C240 M3 server with 2x E5-2680 and 128 GB RAM.
My question is: would it be worth picking one of these up at this price to replace my dated server? (Or are there better options out there in the same price range?) This just seems like a huge step up.
Thanks.
Isn't the answer always, "it depends on your use case"? Hehe.
In short, yes. The memory controller of that generation of Sandy Bridge CPUs is pretty limited, whereas the memory bandwidth of Skylake-SP greatly exceeds that of your old system (and it has PCIe 3.0!). The cores are a significant leap in technology and raw performance (and GHz speeds!) between those two generations, though power usage has gone up IMO. And then there's PCIe 3.0 itself, which alone can help with bottlenecks (I recently hit the PCIe bandwidth limit of my x8 PCIe 2.0 HBA card and had to upgrade to a PCIe 3.0 HBA just to get the full bandwidth out of the ZFS array).
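To put numbers on that HBA bottleneck, here's a back-of-envelope sketch of per-direction PCIe bandwidth from the spec transfer rates and line encodings (real-world throughput lands a bit lower due to protocol overhead):

```python
# Rough per-direction PCIe bandwidth: transfer rate x encoding efficiency x lanes.
# Spec numbers only; actual throughput is somewhat lower.

def pcie_bandwidth_gbs(gt_per_s, encoding_efficiency, lanes):
    """Approximate usable bandwidth in GB/s for one direction."""
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e9

gen2_x8 = pcie_bandwidth_gbs(5.0, 8 / 10, 8)     # PCIe 2.0: 5 GT/s, 8b/10b encoding
gen3_x8 = pcie_bandwidth_gbs(8.0, 128 / 130, 8)  # PCIe 3.0: 8 GT/s, 128b/130b encoding

print(f"PCIe 2.0 x8 ~ {gen2_x8:.1f} GB/s")  # ~ 4.0 GB/s
print(f"PCIe 3.0 x8 ~ {gen3_x8:.1f} GB/s")  # ~ 7.9 GB/s
```

So an x8 Gen2 HBA tops out around 4 GB/s, which a modest ZFS array of SSDs (or enough striped HDDs) can genuinely saturate; the same slot at Gen3 nearly doubles that.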
However... the big catch here, and why the chassis is so cheap, is that there is ZERO documentation, no vendor support, and not even firmware updates for this server.
We are on our own in figuring out what works and what doesn't, how to fix things, and whatever quirks come up. That is why this exact thread exists: to centralize everything we discover while hacking on the same box.
For example, I liked the server because of the OCP ports and the cards I already have. However, while researching the server, I found out that the OCP slots are not standard slots that accept general OCP cards; they have several restrictions that pretty much make them useless, except for the "custom heatsink" version of the dual SFP28 OCP card it ships with.
Also, those are the only NIC connections: two SFP+ cages. So your switch needs at least an SFP (1G) port to connect. You could also get some SFP+ to 10GBase-T adapters. Or just pick up a cheap switch with a few SFP+ ports while you wait for SFP28 and 100 Gbps switches to come down in price, since that's the next milestone for us STH gurus. 10G is getting old and is easily maxed out by two old SATA SSDs in RAID0. 25GbE is the next step for us home labbers as the cost is really dropping (there's a huge 100GbE switch in the For Sale search right now for like $500-$600).
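The "two SATA SSDs max out 10G" claim is easy to sanity-check. Assuming a typical ~550 MB/s sequential read per SATA III SSD (a reasonable figure, not measured on this box):

```python
# Back-of-envelope: why 10GbE saturates with two SATA SSDs striped (RAID0).
# ~550 MB/s per drive is an assumed typical SATA III sequential rate.

sata_ssd_mbs = 550             # per-drive sequential throughput, MB/s
raid0_mbs = 2 * sata_ssd_mbs   # striping roughly doubles sequential throughput

ten_gbe_mbs = 10e9 / 8 / 1e6   # 10 Gbit/s link = 1250 MB/s before protocol overhead

print(f"2x SATA SSD RAID0 ~ {raid0_mbs} MB/s ({raid0_mbs * 8 / 1000:.1f} Gbit/s)")
print(f"10GbE line rate   ~ {ten_gbe_mbs:.0f} MB/s")
# After TCP/IP and SMB/NFS overhead the usable link rate is lower still,
# so two striped SATA SSDs are already pushing the link to its limit.
```

That's 1100 MB/s of disk against roughly 1250 MB/s of raw link, and the gap disappears once you subtract network protocol overhead.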
Right now, the price of the server and CPUs is an investment: a risk that you may not even end up using the server because of some limitation or some crushed hope. If you're willing to accept that risk, then at this price, yes, it's worth the upgrade.
For me, the price of Skylake-SP CPUs is still a bit of a reach. So I'm picking up the oldest, cheapest, $75 CPUs I can find to get the box running. I am personally willing to wait years, many years, for the CPUs to drop in price. The E5-2699 V4, king of all LGA2011-3 CPUs (22C @ 2.6 GHz all-core turbo), used to be a $2000 processor on eBay when it was new. Now it's $200, about 8 years later.
Yeah, I know your generation: Sandy Bridge, Ivy Bridge, Broadwell-E, etc. I've had over a dozen boards and systems across both LGA2011 and LGA2011-3 for over a decade. I still have a few servers of your generation running: a couple pairs of Ivy Bridge-E E5-2670 V2s and one pair of E5-2695 V2s from a nice STH member. I also have an Asus RIVE Black Edition overclocking an 8-core E5-1680 V2 to 4.4 GHz. Your old generation works perfectly fine for VMs and general workloads: good, powerful systems with lots of cores, and dirt cheap! Where they are dated is in raw GHz (Plex and vGPU gaming like a few very fast cores over lots of slower ones) and memory bandwidth (some DB apps want to cache a lot in RAM), not to mention features like M.2 and Optane persistent memory sticks, and frankly, having PCIe lanes for your NVMe drives is a night and day difference. NVMe M.2 and PCIe cards really tidy up a system too: no cables!
In summary, it's a complete risk. But a fun one if you like to tinker.
Getting started: this S-SKUD 2 model seems to require some tinkering just to get going: resetting the IPMI and BIOS, plus some firmware updates for the SFP28 card before it's even usable and controllable. There are also no rails that we've identified yet (we're looking!).
Once you get past that, it handles 8x 3.5" HDDs easily (I'm fitting 10x!) and has two full-height, half-length (FHHL) PCIe slots, which is quite nice. You can add both a SAS3 HBA card AND a 100GbE fiber card, alongside dual big CPUs, 12 memory sockets, and 8-10x 3.5" drives, all in a 1U space. That's quite impressive, really.
There are other 1U systems, like Dell and Quanta, that have 3x or even 4x HHHL PCIe slots, some with an additional OCP/QCT port on top. So it's not unheard of.
Remember: it's a 1U with dual PCIe slots that holds 8-10x 3.5" drives, but it's louder and uses costly CPUs.
Do your research on 1st and 2nd gen Skylake-SP CPUs. They will be your biggest barrier to entry for this server. And don't get the F CPUs.
Patrick has written two great STH articles about Skylake value-add models and which ones make sense to get.