Threadripper Pro memory question

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,872
617
113
Considering the highest end TR Pro has 64 cores and TR Pro itself has 2TB memory support, what kind of applications require 32GB (!) of memory per core? (2097152 MiB divided by 64 cores = 32GB per core if my math's correct)
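For what it's worth, the arithmetic checks out; a quick Python sanity check (assuming 2TB means 2 TiB here, as the MiB figure implies):

```python
# 2 TiB expressed in MiB, split across the 64-core top-end TR Pro part
total_mib = 2 * 1024 * 1024   # = 2097152 MiB
cores = 64

per_core_gib = total_mib / cores / 1024  # MiB -> GiB
print(total_mib, per_core_gib)           # 2097152 MiB, 32.0 GiB per core
```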
 

EffrafaxOfWug

Radioactive Member
Feb 12, 2015
1,394
511
113
I've seen it mentioned several times before, so I just want to come out and say that "memory per core" isn't any sort of meaningful metric, as physical memory and core count are two entirely orthogonal values.

You should just gauge what sort of memory footprint a given workload has - whether it can scale across X cores is another matter entirely. For example, it's very common to see database loads that benefit hugely from fitting as much of the DB into RAM as possible (it's not unusual to see database servers with 2TB of memory these days), but they'll often hit single-threaded bottlenecks from inefficient queries or bad indexes. On the flip side, it's easy for me to generate a video workload in ffmpeg that'll thrash all of my 16 cores but use less than 2GB of memory.
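As a concrete (hypothetical) illustration of the ffmpeg case - file names are placeholders, not anything from this thread - a software x264 transcode like this will happily saturate every thread it's given while keeping only a few frames' worth of state in RAM:

```shell
# CPU-bound across 16 threads, small memory footprint:
# libx264 at a slow preset burns CPU on motion search, not on buffering data.
ffmpeg -i input.mp4 -c:v libx264 -preset veryslow -threads 16 output.mp4
```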

There's every sort of approach in between those two extremes though, it depends entirely on the application and dataset. There are plenty of workloads that benefit from lots of memory and lots of CPUs of course - virtualisation being one of the most widespread examples.
 
  • Like
Reactions: Wasmachineman_NL

Wasmachineman_NL

Wittgenstein the Supercomputer FTW!
Aug 7, 2019
1,872
617
113
@Wasmachineman_NL there are 12-core TR Pros which also support 2TB RAM :p

In addition to what EffrafaxOfWug said, more RAM (and OS caching) helps avoid "slow" IO operations like reading from SSDs or HDDs.
Even more ridiculous, that's ~170GB of RAM per core. Guess that's mostly for DB usage?
 

Spartus

Active Member
Mar 28, 2012
323
121
43
Toronto, Canada
My main use case can be like this at times. Sometimes, as I throw more cores at the problem, the RAM usage and solve time grow faster than the speedup, so I have to under-utilize the cores to allocate enough RAM. Recently I was using 1/8th of available cores per machine because I basically needed 64GB/core to efficiently solve the problem without substantial complexity growth (needed roughly 1.25TB total).
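A toy sizing helper (my own sketch, not Spartus's actual tooling; the numbers are illustrative) makes that trade-off explicit - RAM, not core count, is the binding constraint:

```python
def usable_cores(total_ram_gb: float, ram_per_core_gb: float, total_cores: int) -> int:
    """How many cores you can actually feed without overcommitting RAM."""
    return min(total_cores, int(total_ram_gb // ram_per_core_gb))

# e.g. a 64-core box with 512GB RAM and a solver needing 64GB/core:
print(usable_cores(512, 64, 64))  # 8 -> 1/8th of the machine, as described above
```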

P.S. The software is also licensed per core, so uh yeah, TR Pro is excellent in this use case. Actually I've been using EPYC 7F52 as well (64 cores with 75% disabled to give high frequency for 16 cores with all 256MB cache).
 
  • Like
Reactions: Wasmachineman_NL

lihp

Active Member
Jan 2, 2021
186
53
28
... what kind of applications require 32GB (!) of memory per core? (2097152 MiB divided by 64 cores = 32GB per core if my math's correct)
  • used as a server there are many use cases: ZFS file server, DBs, ...
  • AI/GPU computing nodes: storing the massive amount of data from the GPUs (approx. 20+ GPUs are possible per workstation), analyzing it, and only writing relevant data to SAN/IB storage
  • supercomputer nodes, especially weather data and astrophysics simulations, but also some rare molecular simulations
 
  • Like
Reactions: Wasmachineman_NL

Evan

Well-Known Member
Jan 6, 2016
3,346
598
113
32GB/core is actually not a bad ratio for a lot of enterprise ESX/KVM systems; you would be surprised how little CPU those VMs use.
We use a lot of 8-core, 768GB (96GB per core) servers for DB servers as well.
 
  • Like
Reactions: Wasmachineman_NL