> 10 core i9-10900K now at 400USD. Getting really tempted to bite.
I'm curious what motherboard you'd try to pair that with?
I think Intel peaked with the Broadwell EP (Xeon E5 v4). With many of those servers being pulled now, there are a lot of cheap used parts on the market, which are hard to beat in reality. If you need lots of cores, the E5-2696 v4 or 2699 v4 are super cheap and nice, reliable, stable parts.
I'm running a few multi-GPU workstations on the 2689 v4. While not as turbo-fast as the latest and greatest parts, 10 cores with a 3.7GHz all-core turbo, all the PCIe lanes, and ECC support, on a single ring and a single PCIe root complex, is hard to beat, especially at the price. They're reliable as all get-out and run nice and cool. Even under a max-wattage synthetic workload, they only need a quiet air cooler to stay at full turbo indefinitely.
I think at some point in time you have to weigh the hardware costs vs the power consumption. 10-core all-turbo on the older chips would consume a lot more power than the newer chips to get the same degree of performance? Are you running the 2689v4s on dual CPU? Otherwise, lane support should be the same?
Well, the i9-10900K (Comet Lake, 10c/20t) only has 16 PCIe lanes, lacks ECC support, is built on a 14nm process node, and has a spec-sheet 125W TDP (which is basically fiction; read the many reviews about how hot it actually runs...).
Intel has been stuck on the 14nm process node since Broadwell (the E5-2600 v4 family is also 14nm). They've enhanced it over several generations, but it is what it is.
Now if you don't need the PCIe lanes and don't need ECC support, or large memory support, the 10900K is a nice chip. It is very zippy and a heck of a gaming processor. But you'd also do well to look at AMD's offerings in this segment.
On the other hand, the Broadwell EP Xeon isn't as fast, and it is 4 years older and also built on a 14nm process. But it has 40 PCIe lanes and ECC support, twice the memory bandwidth, can support 10x the memory capacity, and of course it can be used in 2-CPU configs.
The E5-2689 v4 (10c/20t) has a base clock of 3.1GHz, a max turbo of 3.8GHz, and an all-core turbo of 3.7GHz. It isn't as fast on CPU-intensive benchmarks and workloads, but it can move data around a lot better, and it can support multiple GPUs, high-speed networking, and NVMe at the same time. That's a lot of what a Xeon is all about. It also runs very cool, in spite of the 165W maximum TDP.
That's where things get funky in the Intel lineup today on the server/workstation side. We have the Xeon W series, which is basically the Core X extreme (10900X) with ECC turned on, or Xeon Scalable Gold, and neither is really much better. The Xeon W-2255 (10c/20t) has 48 PCIe lanes and a bit higher clock (4.0GHz all-core turbo). The Xeon Gold 6246 is 12c/24t with a 4.1GHz all-core turbo. These all use the newer Skylake mesh architecture, which has some NUMA implications for certain workloads.
AMD's offerings look very interesting by comparison in all of these segments.
It just all depends on what you need and what's important.
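To make "it depends on what you need" concrete, here's a minimal Python sketch that filters the chips above against a requirement list. The spec numbers are as quoted in this thread, and the `req` thresholds are invented for illustration; verify everything against Intel ARK before trusting it:

```python
# Rough spec sheet for the chips discussed above. Figures are as quoted
# in this thread; verify against Intel ARK before relying on them.
chips = {
    "i9-10900K":      {"cores": 10, "pcie_lanes": 16, "ecc": False},
    "E5-2689 v4":     {"cores": 10, "pcie_lanes": 40, "ecc": True},
    "Xeon W-2255":    {"cores": 10, "pcie_lanes": 48, "ecc": True},
    "Xeon Gold 6246": {"cores": 12, "pcie_lanes": 48, "ecc": True},
}

# Hypothetical requirement: enough lanes for multiple GPUs plus a NIC,
# and ECC support. The thresholds are made up for illustration.
req = {"min_lanes": 40, "need_ecc": True}

def meets(req, spec):
    lanes_ok = spec["pcie_lanes"] >= req["min_lanes"]
    ecc_ok = spec["ecc"] or not req["need_ecc"]
    return lanes_ok and ecc_ok

ok = [name for name, spec in chips.items() if meets(req, spec)]
print(ok)  # the 10900K drops out on both lanes and ECC
```

Swap in your own `req` and the "which chip?" argument mostly answers itself.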
I was looking at it and definitely see the difference now. I just didn't expect an i9 to only have 16 lanes.
I guess a Ryzen 5600X/5800X would be much better than the 10th Gen for server usage since they have ECC support and 24 PCIe 4.0 lanes. But unless you're doing 4x GPU gaming setups, 16 lanes should technically be sufficient for calculations?
Lack of ECC is definitely a deal breaker for non desktop usage.
Well, you know, those lanes can get used up pretty quick with big GPUs, 10Gb, 25Gb or even 100Gb networking, and lots of NVMe storage.
It just sort of depends on what you need to do with it.
AMD has really come along on all fronts.
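The "lanes get used up pretty quick" point is easy to put numbers on. Here's a minimal sketch in plain Python; the device list is a hypothetical build, not anyone's actual system:

```python
# Full-width lane appetite of a hypothetical build (boards often
# negotiate GPUs down to x8 when lanes run short).
devices = [
    ("GPU #1", 16),
    ("GPU #2", 16),
    ("100Gb NIC", 8),
    ("NVMe SSD #1", 4),
    ("NVMe SSD #2", 4),
]

def lane_budget(cpu_lanes, devices):
    """Return (lanes wanted at full width, lanes left over)."""
    used = sum(width for _, width in devices)
    return used, cpu_lanes - used

print(lane_budget(40, devices))  # Broadwell EP's 40 lanes: (48, -8)
print(lane_budget(16, devices))  # 10900K's 16 lanes: (48, -32)
```

Even 40 lanes are oversubscribed at full width here, which is why dual-GPU boards usually split to x8/x8; with 16 lanes there's no room for anything beyond the GPUs.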
By big GPUs does the RTX 3090 count? Because I will not be using any of the professional cards with 48GB+ memory. If anything I might get 2x 3090s and NVLink them. So 2x x8 for GPU from the CPU and 2x x4 for NVMe from the chipset?
Then again, no ECC means no chance for virtualization/ZFS hypervisor. What would I need 8-10 cores for then?
The i7-10700K was at $260 earlier. Is there really no good purpose for this crazy pricing other than a boring desktop gaming machine?
Yeah personally for a pair of 3090s I'd want 2x x16 PCIe 3.0 slots minimum, preferably PCIe 4.0.
You might get away with x8 PCIe 3.0, but it's not ideal.
Well, it really depends on what you want to do with it.
I have two systems in my home:
Systems engineering is always more complicated than buying parts and putting them together. Nothing wrong with a killer gaming rig, if you want to game. There's a good case for some professional applications, like SolidWorks, to go with a fast gaming processor and a lower-tier Quadro RTX 4000 (professional card). That would make a killer fast rig for that work. But I like ECC memory for systems where money is made.
When I first built these workstations, they were 4-way 1080 Ti systems, because that was the best I could do. But in the RTX era, it is 2 cards plus a high-speed Mellanox card so I can do distributed computing. It's just not terribly practical to build RTX in 4 slots, for various reasons. I'd love to have Quadro RTX 8000 or the new A6000 cards. But budgets are budgets.
The 8-10 core CPUs are good for my purposes. CPU loads aren't high while training a deep learning model, but they come in handy for all the data prep that goes on, which GPUs aren't terribly helpful with. Just lots of Python code doing that.
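That CPU-side data prep is mostly ordinary Python. A hypothetical example (not the poster's actual pipeline): shuffling a sample list and splitting off a validation set before the GPUs ever see a batch:

```python
import random

def train_val_split(samples, val_frac=0.1, seed=42):
    """Shuffle a list of samples and split off a validation set.

    Typical of the CPU-bound prep that runs before training starts
    (hypothetical example, not the poster's code).
    """
    rng = random.Random(seed)  # seeded for a reproducible split
    items = list(samples)
    rng.shuffle(items)
    n_val = int(len(items) * val_frac)
    return items[n_val:], items[:n_val]

train, val = train_val_split(range(1000))
print(len(train), len(val))  # 900 100
```

Work like this (plus decoding, resizing, augmenting) is where the extra cores earn their keep, since it parallelizes well across processes.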