Throw memory and CPU at it within reason, keeping any licensing constraints in mind as was mentioned. The vast majority of DB performance problems are due to application code and/or SQL, not hardware sizing, however. The best thing you can do for a database is find the top X statements by execution time, improve either the code or the SQL, rinse and repeat.
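As a sketch of that "top X by execution time" step: on PostgreSQL, assuming the pg_stat_statements extension is installed and enabled (other engines have equivalents, e.g. Oracle AWR or SQL Server Query Store), something like this surfaces the worst offenders:

```sql
-- Top 10 statements by cumulative execution time.
-- Column names vary by version: total_exec_time / mean_exec_time
-- are PostgreSQL 13+; older versions use total_time / mean_time.
SELECT query,
       calls,
       total_exec_time,   -- cumulative ms spent in this statement
       mean_exec_time     -- average ms per call
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Ordering by cumulative rather than per-call time matters: a 5 ms query run a million times a day is usually a better tuning target than a 30 s report run once.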
Find out the usage profile: OLTP vs. OLAP or a combination, and what the driving force behind performance is - high transaction counts (sales orders etc.), analytics, or something else. Will most of the data become warm/cold after a while, or is the entire growing dataset actively being changed?
Is this purely for the DB, or is there a middleware component utilizing the same server? If the latter, identify the requirements for each - not necessarily to split them up, but to understand what needs more attention.
Size alone is no indication of performance requirements; people just like large numbers. I have 2-4+ TB OLTP DBs that work just fine in production with 3-4 vCPUs (barely ~40% of that reserved) and 48 GB RAM, but also 300 GB DBs with 48 CPUs that definitely need the compute power.