Intel Ice Lake Xeon or AMD EPYC Milan for SQL Server?

wyzdic

New Member
Jul 22, 2019
1
0
1
I want to build a server to run SQL Server faster. My plan is to buy EPYC Milan, but a vendor tells me that for my database application Intel would be faster. I'm confused. Which one should I choose?
 

i386

Well-Known Member
Mar 18, 2016
4,221
1,540
113
34
Germany
"it depends"

With an unlimited budget you could get Intel systems with 8 CPU sockets and support for 12+ TB of RAM (I wrote this because you didn't post about budget or requirements :D)

With a limited budget you have to analyse your workloads, find the bottleneck, and see whether you can fix or change it (e.g. not enough RAM -> more I/O to slower storage -> get faster storage or more RAM).
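A quick, concrete way to find that bottleneck is to look at SQL Server's wait statistics. Here's a minimal sketch, assuming Python with pyodbc; the connection string is a placeholder for your environment:

```python
# Minimal sketch: rank SQL Server wait types to see where time is spent.
# The connection string below is a placeholder -- adjust driver/server/auth.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your-server;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

query = """
SELECT TOP 10
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP%'   -- skip idle/housekeeping waits
ORDER BY wait_time_ms DESC;
"""

for wait_type, wait_ms, tasks in conn.cursor().execute(query):
    print(f"{wait_type:<40} {wait_ms:>12} ms  ({tasks} tasks)")
```

PAGEIOLATCH_* waits at the top point at storage, LCK_* at lock contention, and SOS_SCHEDULER_YIELD at CPU pressure, which tells you whether RAM, disks, or cores are the thing worth buying.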
 

986box

Active Member
Oct 14, 2017
230
41
28
44
Why not use one of the cloud vendors? It may not be the cheapest route, but you don't need to deal with rack space and backup generators.
What version and edition are you planning to use?
 

KarelG

Member
Jan 29, 2020
48
13
8
Use cloud vendors and benchmark both CPUs yourself. It really depends. I'm inclined to bet on EPYC here due to its bigger L3 cache, but it really depends; size is not everything, and I don't know how large the hot working set of your queries will be. If it's bigger than either L3, then even Intel may be faster under some conditions.
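If you do benchmark in the cloud, even a crude timing harness beats spec sheets. A minimal sketch in Python with pyodbc; the DSNs and the sample query are placeholders for your own hot queries:

```python
# Crude A/B benchmark: time the same representative query on two instances.
# DSNs and QUERY are placeholders -- use a query from your real workload.
import time
import statistics
import pyodbc

QUERY = "SELECT COUNT(*) FROM Sales.Orders;"   # placeholder workload

def bench(conn_str: str, runs: int = 20) -> float:
    """Median wall-clock seconds over `runs` executions."""
    cur = pyodbc.connect(conn_str).cursor()
    cur.execute(QUERY).fetchall()              # warm the buffer pool once
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        cur.execute(QUERY).fetchall()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings)

for label, dsn in [("Ice Lake", "DSN=icelake_box"), ("Milan", "DSN=milan_box")]:
    print(f"{label}: {bench(dsn):.4f} s median")
```

Run it with the same data loaded on both instances; taking the median over warm runs keeps a single slow outlier from deciding the purchase.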
 

edge

Active Member
Apr 22, 2013
203
71
28
It really depends on your specific workload. Higher clocks benefit transactional work, while higher core counts and more PCIe throughput are a huge advantage for data warehouses and reporting servers, provided the app is properly written and the server properly configured.
 

dbTH

Member
Apr 9, 2017
149
59
28
edge said: It really depends on your specific workload. Higher clocks benefit transactional work, while higher core counts and more PCIe throughput are a huge advantage for data warehouses and reporting servers, provided the app is properly written and the server properly configured.
It's the other way round. In general, higher core counts suit OLTP workloads (if there's no DB licensing constraint) and higher CPU clocks suit DWH and reporting servers.
 

edge

Active Member
Apr 22, 2013
203
71
28
dbTH said: It's the other way round. In general, higher core counts suit OLTP workloads (if there's no DB licensing constraint) and higher CPU clocks suit DWH and reporting servers.
Inserts and updates are single-threaded. Faster clocks and faster disks mean you reach the point where you release the locks sooner, reducing contention and increasing transaction throughput. This does not show up in tests like TPC-C because there is less than 2% cross-referencing between queries, which makes for artificially low contention compared to almost all real-life databases.

Read queries, which are predominant in reporting and DW, can be highly parallelized on SQL Server by having a data partition per core. Of course, to get the benefit of parallelization, you need each partition to be capable of providing data at the MCR (maximum consumption rate) of the processor. With effective parallelization, performance scales with the data throughput, at (number of cores x MCR). MCR does not necessarily scale linearly with clock rate, as it is dependent on the throughput from memory to the core. So increasing the number of cores from 8 to 16 doubles throughput, while increasing the clock speed of 8 cores from, say, 2.4 GHz to 2.8 GHz gets you at most a ~17% boost (and probably less, as there are challenges in scaling storage-system throughput without increasing thread count).
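To put numbers on that, here is the sizing arithmetic as a small sketch; the 200 MB/s per-core MCR is purely illustrative (Fast Track has you measure your own):

```python
# Back-of-envelope from the MCR argument above.
# mcr_mb_s is an assumed per-core maximum consumption rate -- measure yours.
def required_storage_throughput(cores: int, mcr_mb_s: float) -> float:
    """Aggregate scan rate (MB/s) storage must sustain to keep every
    partition/core pair fed at its maximum consumption rate."""
    return cores * mcr_mb_s

for cores in (8, 16):
    print(f"{cores} cores x 200 MB/s -> "
          f"{required_storage_throughput(cores, 200):,.0f} MB/s from storage")

# Doubling cores doubles the achievable throughput, while raising the clock
# of 8 cores from 2.4 to 2.8 GHz is at best 2.8 / 2.4 - 1 = ~17%.
```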

I would point you to the Microsoft Data Warehouse Fast Track reference architecture for understanding DW design for standalone servers.
 

986box

Active Member
Oct 14, 2017
230
41
28
44
If it's for OLAP, do yourself a favor and look into Snowflake or Synapse/Databricks. It's a better solution overall.
SQL Server for OLTP can scale well if you can keep the business rules in the middle tier.
Between AMD and Intel, AMD has higher compute density.

For small and medium-sized shops, it's better to use cloud services. They let you scale quickly when needed.
 

edge

Active Member
Apr 22, 2013
203
71
28
986box said: If it's for OLAP, do yourself a favor and look into Snowflake or Synapse/Databricks. It's a better solution overall.
I would be really interested in you filling this idea out with specifics as to why you believe it's true. I remember hearing all the bull about Hadoop and data lakes, so I am a bit of a skeptic when I see new technologies pushed without any logic or detail other than "I say so". My interest is genuine, not hostile: I have people asking me about Snowflake, and before I take a few months learning it deeply, I'd like to know why I should. I am not asking for an exhaustive post, just a 40-floor elevator pitch (2 minutes, throw fastballs).
 

986box

Active Member
Oct 14, 2017
230
41
28
44
If you do any web analytics, the data grows much faster than in a traditional DW. With Snowflake, storage is cheap and you are only billed for your workload. The time-travel feature lets you recover to a point in time. You do not need a team to help manage the environment.
You can ingest data through either internal or external stages. External stages support AWS S3, Azure Blob, and Google Cloud Storage buckets, and Snowpipe will automatically ingest data posted to a stage. It supports ANSI SQL, and you can host it on any of the big three cloud platforms. Data is stored encrypted, which meets insurance requirements around ransomware.
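For flavor, this is roughly what that ingest path looks like through the snowflake-connector-python package; every object name, the S3 URL, and the storage integration here are made-up placeholders:

```python
# Sketch of the external-stage + Snowpipe ingest path described above.
# All names, the URL, and the integration are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="YOUR_USER", password="...", account="YOUR_ACCOUNT"
)
cur = conn.cursor()

# External stage pointing at an S3 bucket (Azure Blob / GCS work the same way).
cur.execute("""
    CREATE OR REPLACE STAGE raw_events_stage
      URL = 's3://example-bucket/events/'
      STORAGE_INTEGRATION = example_s3_int
      FILE_FORMAT = (TYPE = 'JSON')
""")

# Snowpipe with AUTO_INGEST loads new files as they land in the stage.
cur.execute("""
    CREATE OR REPLACE PIPE raw_events_pipe AUTO_INGEST = TRUE AS
      COPY INTO analytics.raw_events
      FROM @raw_events_stage
""")
```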

We negotiated a better deal with Azure, so we are migrating to Synapse/Databricks. It's still a little rough around the edges. Most of the data is stored in a data lake on Blob Storage, accessed through live data views. Microsoft is still working on some performance issues.

Compared to Hadoop, you don't have to manage a multi-node cluster. You just need data engineers and data analysts.
 