Trying to build my new server, should I wait?


segma

New Member
Feb 11, 2022
I have dual E5-2697 v2 now, but the single-core performance is just too low, and with all those cores the power consumption is worse than a single i9-12900K. Unfortunately, single-core performance matters a lot in my case. So I'm wondering: will something go terribly wrong if I don't have ECC memory sticks and a motherboard that supports them? Should I wait until W680/W690 motherboards are in mass production before I buy? (Edited: yes, my server is "never" going to shut down, and neither is my daily PC; they do have a UPS...)
 

T_Minus

Build. Break. Fix. Repeat
Feb 15, 2015
Very unlikely something is going to go terribly wrong without ECC.

If you're on dual E5 v2 now, though, are you going to miss the PCIe lanes when moving to consumer Intel? What about a newer-gen Intel server CPU with better single-core performance? Or a 12-core AMD and a motherboard that supports ECC?

Could you offload your single-core workload to a lower-power Intel E-2xxx server and keep the dual E5 for other stuff?

Just throwing out some ideas :D
 

segma

New Member
Feb 11, 2022
Very unlikely something is going to go terribly wrong without ECC.

If you're on dual E5 v2 now, though, are you going to miss the PCIe lanes when moving to consumer Intel? What about a newer-gen Intel server CPU with better single-core performance? Or a 12-core AMD and a motherboard that supports ECC?

Could you offload your single-core workload to a lower-power Intel E-2xxx server and keep the dual E5 for other stuff?

Just throwing out some ideas :D
1. ECC has been bothering me for a long time. From what I've seen, old DDR2/DDR3 systems without ECC just don't run well over long periods; weird bugs start showing up that I don't really know how to describe.
2. Yeah, about that, I actually don't care. A few months ago I tried building an all-in-one machine: my server, router, and daily-driver PC all in one box. That's when the problem showed up: the E5's single-core IPC is just too low to even get the full power out of my GPU. Moving the mouse, opening Chrome, it wasn't even close to my old 4th-gen Intel E3. Now I've switched to 12th gen as my daily-driver PC, no more all-in-one; each machine has its own job. So the answer is still: I AM NOT GONNA MISS IT.
3. Hmm, I did some searching, and the latest Intel workstation CPU is the W-1390P. It has great single-core performance and I love it, but the price just isn't worth it to me. The W-1390P is basically a lower-base-frequency i9-11900K with ECC memory support; its performance is no better than the newer i9-12900K, and the price is higher too.
4. Can't do. The software isn't open source, and unfortunately I can't share more details about the app, but you can think of it as something like CS:GO online (CS:GO uses the old DX9 engine and is EXTREMELY hungry for single-core performance).
 

unwind-protect

Active Member
Mar 7, 2016
Boston
I'd wait for the W680 boards. I mean, they're already on the vendors' websites; it can't take that much longer.

What v2 CPUs do you have? Maybe you can buy some with higher clock speeds.
 

ajr

New Member
Oct 14, 2019
In the process of migrating a server from a pair of E5-2697 v2 to a Ryzen 5900X with an ASRock Rack board. Going from 256GB to 128GB of RAM is a little tight, but most of that was used by Ceph caching, so it won't have much of an impact on VMs and containers. The reduction in PCIe lanes is a little annoying, but everything I needed still had enough lanes.
The 5900X is faster than the pair of E5-2697 v2s. Single-thread performance is almost double, and there's no penalty for crossing the QPI link. Plus, if you have encrypted storage, the new CPUs handle the encrypt/decrypt much faster. Power consumption is much lower with the Ryzen too. The Intel v2s were great and aged well if you needed a high core count and cheap LRDIMMs on the used market, but modern desktop parts run circles around them while using a fraction of the power.
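If you want to see that encrypt/decrypt gap on your own hardware, a toy benchmark is enough. This is a minimal sketch, assuming Python with the third-party cryptography package installed; the absolute number includes Python overhead, so only compare the same test run on the old and new boxes:

```python
# Toy AES throughput check; requires the third-party "cryptography" package
# (pip install cryptography). On CPUs with AES-NI the cipher runs on hardware
# instructions, so the old/new gap shows up clearly in the MB/s figure.
import os
import time

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr_throughput(total_mb: int = 256, chunk_mb: int = 4) -> float:
    """Encrypt total_mb of data with AES-256-CTR and return MB/s."""
    key = os.urandom(32)    # 256-bit key
    nonce = os.urandom(16)  # CTR counter block
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(total_mb // chunk_mb):
        encryptor.update(chunk)
    encryptor.finalize()
    return total_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    print(f"AES-256-CTR: ~{aes_ctr_throughput():.0f} MB/s")
```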
 

Markess

Well-Known Member
May 19, 2018
Northern California
3. Hmm, I did some searching, and the latest Intel workstation CPU is the W-1390P. It has great single-core performance and I love it, but the price just isn't worth it to me. The W-1390P is basically a lower-base-frequency i9-11900K with ECC memory support; its performance is no better than the newer i9-12900K, and the price is higher too.
I just upgraded my desktop/workstation from a Broadwell E5 to an Alder Lake i5-12600K and the difference is quite noticeable. The initial Alder Lake CPUs, at least, all support ECC. Single-thread performance seems important to you: the i5-12600K benchmarks higher than the W-1390P in both single- and multi-threaded tests. It also comes very close to the i9 in single-thread and is much less expensive than either the W-1390P or the i9-12900K. Comparison from the PassMark site.

I'd wait for the W680 boards. I mean, they're already on the vendors' websites; it can't take that much longer.

What v2 CPUs do you have? Maybe you can buy some with higher clock speeds.
Waiting for W680 is a good idea. With Ivy Bridge E5s, going to two higher-clocked (and lower core count) chips will only give a small gain, so it may not be worth it.

The 5900X is faster than the pair of E5-2697 v2s. Single-thread performance is almost double, and there's no penalty for crossing the QPI link. Plus, if you have encrypted storage, the new CPUs handle the encrypt/decrypt much faster.
Plus the 5900X, and motherboards to support it, are already available now.
 

nabsltd

Well-Known Member
Jan 26, 2022
The reduction in PCIe lanes is a little annoying, but everything I needed still had enough lanes.
The v2s still make great storage servers for non-encrypted data, since you have so many PCIe lanes and generally don't need the CPU.
But I have noticed how bad the single-thread performance is, especially on poorly written code. I have two programs that do the same basic checksum; one uses bad single-threaded code, and the other is properly multi-threaded (since the use case is often multiple files at the same time), and the difference is night and day.
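For illustration, the two patterns look something like this minimal Python sketch; SHA-256 stands in for whatever checksum the real programs use, and the file list is hypothetical:

```python
# Serial vs. parallel checksum of a batch of files, sketched in Python.
import hashlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def checksum(path: Path) -> str:
    """Hash one file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1024 * 1024), b""):
            h.update(block)
    return h.hexdigest()

def checksum_serial(paths):
    # One file at a time: bottlenecked by single-thread performance.
    return {p: checksum(p) for p in paths}

def checksum_parallel(paths, workers=8):
    # One file per worker: hashlib releases the GIL while hashing,
    # so several files really do hash concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(paths, pool.map(checksum, paths)))
```

Because hashlib releases the GIL during the actual hashing, plain threads are enough to keep several cores busy on a multi-file batch.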
The problem I'm having with the latest generation is that I can barely find motherboards with enough slots for a GPU, a SAS RAID card, and one other card (even an x4 slot) that still allow me to use all the devices on the motherboard (M.2, USB 3.0, etc.). If it weren't for the video editing, I could offload my local storage to the network, but even 10Gbps isn't enough for a really snappy feel.
 

Markess

Well-Known Member
May 19, 2018
Northern California
The problem I'm having with the latest generation is that I can barely find motherboards with enough slots for a GPU, a SAS RAID card, and one other card (even an x4 slot) that still allow me to use all the devices on the motherboard (M.2, USB 3.0, etc.). If it weren't for the video editing, I could offload my local storage to the network, but even 10Gbps isn't enough for a really snappy feel.
Yeah, once you get used to having 40+ PCIe lanes on the CPU, it's a real challenge figuring out how to work around having only 20-24. The W680 boards will have more configuration options... so long as vendors don't stick one of the limited number of PCIe slots where it will be covered by the now almost ubiquitous two-slot-wide GPU, all because they needed room for two PCI slots at the end of the board (how many people need two PCI slots any more?). Yeah, I'm looking at you, ASRock!
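As a rough illustration of the juggling act, here's a toy lane-budget tally for a hypothetical Alder Lake build; the devices and slot widths are made up for the example, while the platform numbers (16+4 CPU lanes, DMI 4.0 x8 chipset uplink) match Intel's published specs:

```python
# Toy lane-budget tally for a hypothetical Alder Lake build.
CPU_LANES = 20  # 16 for the main slot + 4 for one CPU-attached M.2

cpu_devices = {
    "GPU (x16 slot)": 16,
    "M.2 NVMe #1 (CPU-attached)": 4,
}
chipset_devices = {
    "SAS RAID card (x8 chipset slot)": 8,
    "10GbE NIC (x4 chipset slot)": 4,
    "M.2 NVMe #2 (chipset)": 4,
}

used = sum(cpu_devices.values())
print(f"CPU lanes: {used}/{CPU_LANES} used")
print(f"Chipset lanes requested: {sum(chipset_devices.values())} "
      "(all funneled through the DMI x8 uplink, so they contend for bandwidth)")
```

The CPU lanes fill up immediately, and everything else ends up sharing the chipset uplink, which is exactly the squeeze described above.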

Luckily for me, my workflow recently changed, so for my new build I didn't really need ECC and was able to shed a second GPU and another card, allowing everything else to fit on a Z690 board.
 

unwind-protect

Active Member
Mar 7, 2016
Boston
Power consumption is much lower with the Ryzen too. The Intel v2s were great and aged well if you needed a high core count and cheap LRDIMMs on the used market, but modern desktop parts run circles around them while using a fraction of the power.
Do you happen to have numbers for idle power consumption for the two systems?
 

ajr

New Member
Oct 14, 2019
Plus the 5900X, and motherboards to support it, are already available now.
This is definitely another reason. The v2 hardware is approaching 10 years old, and it needed to be replaced sooner rather than later. Under high loads, the system was locking up. It's difficult to prove out, but the leading suspicion is the VRMs becoming marginal under the higher load. There are plenty of possible software causes too, but those can also be solved by setting up a new system. One could maybe wait for more DDR4-based systems to show up in the usual used marketplaces, or wait until DDR5-based systems are more common, but couple that with two failed SSDs, and "available now" won out.

The v2s still make great storage servers for non-encrypted data, since you have so many PCIe lanes and generally don't need the CPU.
But I have noticed how bad the single-thread performance is, especially on poorly written code. I have two programs that do the same basic checksum; one uses bad single-threaded code, and the other is properly multi-threaded (since the use case is often multiple files at the same time), and the difference is night and day.
The problem I'm having with the latest generation is that I can barely find motherboards with enough slots for a GPU, a SAS RAID card, and one other card (even an x4 slot) that still allow me to use all the devices on the motherboard (M.2, USB 3.0, etc.). If it weren't for the video editing, I could offload my local storage to the network, but even 10Gbps isn't enough for a really snappy feel.
PCIe lanes are a big selling point of these older server platforms. In my opinion, cheap memory and PCIe lane count are what's keeping them around. Buying DDR4 ECC UDIMMs isn't the cheapest, but I don't see DDR5 being cheaper any time soon if you want more modern CPUs. If the desktop platforms had four more lanes on either the chipset or the CPU, it would go a long way. Bifurcation has come a long way in the last couple of years, letting the user divide the lanes to better fit their own application, but it's still not quite a replacement for having 40+ lanes. There are some more creative solutions, like the QNAP cards STH reviewed a while ago:
QNAP QM2-2P10G1TA Adapter General Purpose Mini Review (servethehome.com)
QNAP has plenty of variations on their website, but they can get kind of expensive and can saturate the upstream bandwidth they have.
This STH forum thread has a lot of information on add-in cards that include a PCIe switch:
Multi-NVMe (m.2, u.2) adapters that do not require bifurcation | ServeTheHome Forums
They can be somewhat expensive and power-hungry, but they might bridge the gap between a desktop-like platform and a full server/workstation platform.
I didn't need 10G or many USB ports for my use case, just one HBA and some NVMe, so the desktop-based Ryzen won out on $/performance.

Do you happen to have numbers for idle power consumption for the two systems?
I don't have great numbers, but the old system (2x E5-2697 v2, 16x 16GB DDR3 LRDIMMs, 3x Intel 750 Series SSDs, and 36 HDDs) would idle around 570-590W. The server isn't fully transferred over yet, but the 5900X-based one is looking to be closer to 400W based on what the IPMI is reporting. It also has some newer M.2 SSDs that draw less power than the old Intel 750 series. In total, about a 150-180W savings, which seems reasonable to me considering the change in CPU, SSDs, and memory.
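Back-of-the-envelope, that idle delta adds up over a year; a tiny sketch, assuming a placeholder $0.15/kWh rate (substitute your own tariff):

```python
# Yearly cost of the ~150-180W idle reduction reported above.
RATE_USD_PER_KWH = 0.15  # placeholder assumption
HOURS_PER_YEAR = 24 * 365

for saved_watts in (150, 180):
    kwh_per_year = saved_watts / 1000 * HOURS_PER_YEAR
    cost = kwh_per_year * RATE_USD_PER_KWH
    print(f"{saved_watts} W -> {kwh_per_year:.0f} kWh/yr -> ${cost:.0f}/yr")
```

That works out to roughly 1300-1600 kWh, or about $200-240 a year at that rate, before counting cooling.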
 

zer0sum

Well-Known Member
Mar 8, 2013
Do you happen to have numbers for idle power consumption for the two systems?
I have a Ryzen 5600 setup and a Xeon E5-2620 v3 running, and they use about ~35W and ~110W respectively at idle.
They are very different systems, though, so it's hard to make a fair comparison.

The Ryzen 5600 is a Proxmox host with an X470D4U, 128GB memory, 2x NVMe, 1x SATA, and 1x Mellanox CX3.
The Xeon E5-2620 v3 is an Unraid server with an X10SRL-F, 64GB memory, 1x Nvidia P400, 2x SATA SSDs, 2x NVMe, 2x LSI HBAs, 4x 10TB SAS and 2x 4TB SATA drives, and 1x Mellanox CX3.

I think the Xeon system's idle power draw is pretty impressive given all the hardware :p