Microsoft SQL Performance Tweak?

Discussion in 'Linux Admins, Storage and Virtualization' started by Albert Yang, Apr 1, 2018.

  1. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    Hi,
    I was wondering if anyone has tips for getting the best performance out of MSSQL inside Proxmox. I was reading this:
    Code:
    Trace Flag T8038 with Microsoft SQL Server
    Setting the trace flag -T8038 will drastically reduce the number of context switches when running SQL 2005 or 2008.
    
    To change the trace flag:
    
    Open the SQL server Configuration Manager
    Open the properties for the SQL service typically named MSSQLSERVER
    Go to the advanced tab
    Append ;-T8038 to the end of the startup parameters option
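In practice you edit that string in the Configuration Manager GUI, but as a rough sketch, the change amounts to appending the flag to the existing semicolon-separated startup parameter string (the existing parameter value below is a made-up placeholder):

```shell
# Placeholder existing startup parameters (real values come from your install)
params='-dC:\MSSQL\DATA\master.mdf'
# Append the trace flag, semicolon-separated, as in the quoted steps
params="${params};-T8038"
echo "$params"   # -> -dC:\MSSQL\DATA\master.mdf;-T8038
```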

    Currently this is my fsync performance:
    Code:
    root@prometheus:~# pveperf /vmdata
    CPU BOGOMIPS:      63967.52
    REGEX/SECOND:      3043825
    HD SIZE:           5537.01 GB (vmdata)
    FSYNCS/SECOND:     35669.25
    DNS EXT:           95.68 ms
    DNS INT:           74.55 ms (mydomain.local)
    Running ZFS RAID 10 with 32 GB of RAM, with the VM disk set to no cache (according to the performance tweaks page on the website, that's supposedly the best).
    I've also been reading that MSSQL does a lot more writes than reads, or am I wrong?

    Here are my ARC limits:
    Code:
    # Min 2048MB / Max 4096 MB Limit with 32 gigs of ram
    options zfs zfs_arc_min=2147483648
    options zfs zfs_arc_max=4294967296
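For what it's worth, those byte values line up with a 2 GiB min / 4 GiB max; a quick sketch of the arithmetic, plus the runtime-apply path that ZFS on Linux exposes:

```shell
# 2048 MiB and 4096 MiB expressed in bytes, matching the modprobe options
arc_min=$((2048 * 1024 * 1024))
arc_max=$((4096 * 1024 * 1024))
echo "options zfs zfs_arc_min=$arc_min"   # -> options zfs zfs_arc_min=2147483648
echo "options zfs zfs_arc_max=$arc_max"   # -> options zfs zfs_arc_max=4294967296
# To apply without a reboot (ZFS on Linux exposes these as module parameters):
# echo "$arc_max" > /sys/module/zfs/parameters/zfs_arc_max
```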
    Also, here is iperf from Proxmox to the VM running 2012 R2 with SQL 2016:

    Code:
    root@prometheus:~# iperf3 -c 192.168.3.201
    Connecting to host 192.168.3.201, port 5201
    [  4] local 192.168.3.252 port 54380 connected to 192.168.3.201 port 5201
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  1.52 GBytes  13.0 Gbits/sec    0    221 KBytes
    [  4]   1.00-2.00   sec  1.57 GBytes  13.5 Gbits/sec    0    221 KBytes
    [  4]   2.00-3.00   sec  1.71 GBytes  14.7 Gbits/sec    0    221 KBytes
    [  4]   3.00-4.00   sec  1.69 GBytes  14.5 Gbits/sec    0    221 KBytes
    [  4]   4.00-5.00   sec  1.47 GBytes  12.6 Gbits/sec    0    221 KBytes
    [  4]   5.00-6.00   sec  1.69 GBytes  14.5 Gbits/sec    0    221 KBytes
    [  4]   6.00-7.00   sec  1.77 GBytes  15.2 Gbits/sec    0    221 KBytes
    [  4]   7.00-8.00   sec  1.79 GBytes  15.3 Gbits/sec    0    221 KBytes
    [  4]   8.00-9.00   sec  1.79 GBytes  15.4 Gbits/sec    0    221 KBytes
    [  4]   9.00-10.00  sec  1.73 GBytes  14.9 Gbits/sec    0    221 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  16.7 GBytes  14.4 Gbits/sec    0             sender
    [  4]   0.00-10.00  sec  16.7 GBytes  14.4 Gbits/sec                  receiver
    
    Thank you
     
    #1
  2. Dimholdt

    Dimholdt New Member

    Joined:
    Feb 14, 2017
    Messages:
    3
    Likes Received:
    1
    Hi Albert,

    MS SQL databases can be read-intensive, write-intensive, or both. It really depends on the database design and the application(s) that connect to the database.

    A couple of things to think about.
    MS SQL databases have a minimum of 2 files: the data file, which includes metadata such as table definitions, stored procs, the data itself, etc... AND a transaction log file, which is used as a "scratch" file until data is committed to the data file.

    In a typical configuration, the data file would be placed on RAID 10 storage and the transaction file on the fastest storage available. Avoid putting the data files and transaction file on the same drive or LUN.

    Data files can be read- or write-intensive. Transaction log files are almost always write-intensive.

    If possible configure SQL Server and the database files in the following manner.

    Data files on RAID 10.
    Transaction log files on RAID 10
    Tempdb files on RAID 10

    Also, please note that SQL Server stores its data in 8K pages. This cannot be changed.
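Those 8K pages are grouped into 64K extents (8 pages each), which is why 64K is a common starting point for block/record sizes on SQL Server data volumes. A quick sketch of the arithmetic; the ZFS dataset names are made up for illustration and the recordsize choice should be benchmarked against your own workload:

```shell
# SQL Server page and extent sizes, in bytes
page=$((8 * 1024))       # 8 KiB page
extent=$((8 * page))     # 8 pages per extent = 64 KiB
echo "$extent"           # -> 65536
# A hedged starting point on ZFS (hypothetical dataset names):
# zfs set recordsize=64K vmdata/mssql-data
```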
     
    #2
  3. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    Thanks for the reply, it really helped me understand this better. As for Proxmox: since I'm going to virtualize the data storage on ZFS RAID, when creating the VM should I use the default cache or writeback for the disk? Would I also create 3 virtual disks: one for the OS, another for the database, and a third for the transaction log? Or is there something I can tweak on Proxmox or ZFS?

    Thank you
     
    #3
  4. Dimholdt

    Dimholdt New Member

    Joined:
    Feb 14, 2017
    Messages:
    3
    Likes Received:
    1
    I don't have any experience with proxmox but I have used SQL Server extensively in a vSphere environment.

    Data integrity in any relational database system is paramount. Disk caching can be disastrous for a database: if power is lost, cached writes never reach stable storage. Microsoft strongly recommends turning off disk caching.

    Yes, at a minimum I would create 3 virtual disks for SQL Server (1 for the data files, 1 for the transaction log files, and 1 for tempdb). At the very least separate the transaction log files to another LUN.

    Here is an example...

    I: Drive for Data File(s)
    S: Drive for Transaction Logs
    T: Drive for Tempdb files.

    You REALLY want to keep IO from these files on separate LUN(s) and on separate pools if your goal is to maximize performance.

    If this is a functional testing or dev environment, then you can place all the files on the same LUN.
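In Proxmox terms, this would translate to attaching three separate virtual disks to the VM. A hedged sketch using `qm set` (the VM id, storage name, and sizes below are all placeholders):

```shell
vmid=100; storage=vmdata   # hypothetical VM id and storage name
data_cmd="qm set $vmid --scsi1 ${storage}:100"   # I: data files, 100G
log_cmd="qm set $vmid --scsi2 ${storage}:50"     # S: transaction logs, 50G
temp_cmd="qm set $vmid --scsi3 ${storage}:50"    # T: tempdb, 50G
printf '%s\n' "$data_cmd" "$log_cmd" "$temp_cmd"
```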
     
    #4
    Albert Yang likes this.
  5. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    Thanks for the reply, this helped me a lot. So since I have a ZFS pool with the VM disks, I would just need to add 3 virtual drives to separate the information.
     
    #5
  6. mrkrad

    mrkrad Well-Known Member

    Joined:
    Oct 13, 2012
    Messages:
    1,214
    Likes Received:
    45
    I'd check your tempdb usage. One tempdb file with, say, 4 cores causes a lot of backlog; you may want to spread tempdb across 4 LUNs and put the tempdb log files somewhere else.

    If you are using SSDs as storage, logs from several databases can go on the same LUNs, but it is not great practice due to the possibility of corruption!

    Each LUN gets a separate disk queue and can be accessed concurrently, so it is better to spread them out over separate controllers, each with its own LUN dedicated to the task, to achieve maximum speed.

    Also, SQL Server is read-intensive only when the database doesn't fit in RAM; if it all fits in RAM, then it will just be log/tempdb write-intensive!
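The usual rule of thumb behind spreading tempdb is one tempdb data file per logical core, capped at around 8, adding more only if contention persists. A quick sketch of that heuristic:

```shell
cores=4                               # example core count, as in the post
files=$(( cores < 8 ? cores : 8 ))    # one tempdb data file per core, capped at 8
echo "$files"                         # -> 4
```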
     
    #6
    Albert Yang likes this.
  7. Albert Yang

    Albert Yang New Member

    Joined:
    Oct 26, 2017
    Messages:
    26
    Likes Received:
    0
    Thank you for the reply; this helped me a lot in thinking about how I'm going to restructure the SQL server. Again, thank you so much.
     
    #7
  8. BLinux

    BLinux Well-Known Member

    Joined:
    Jul 7, 2016
    Messages:
    1,249
    Likes Received:
    256
    @Dimholdt Thanks for sharing your experience with this... I don't often work with MS SQL, mostly with MySQL, and the best practice is very similar. When you say to split the data file and transaction log onto different LUNs, I assume you also mean LUNs backed by different disks? If all LUNs were backed by the same pool of disks, you would still be sharing the same resource for both the data and the transaction log...

    Also, even if both LUNs are backed by the same physical disks, is there still some advantage if the LUNs have different block sizes? Is there an optimal block size for storing the data file vs. the transaction log?
     
    #8
  9. Evan

    Evan Well-Known Member

    Joined:
    Jan 6, 2016
    Messages:
    1,994
    Likes Received:
    284
    There's an advantage to separate FS/LVs even on a single VG (RAID group) due to queue contention, etc.
    Think of big SANs, where the underlying disks are shared but the LUNs and filesystems are allocated separately.
     
    #9
  10. Dimholdt

    Dimholdt New Member

    Joined:
    Feb 14, 2017
    Messages:
    3
    Likes Received:
    1
    @BLinux
    Yes, I do mean LUNs backed by different disks. The idea is to isolate IO between the data files and transaction LUNs. This is especially important if there are multiple databases on the same SQL Server instance.

    It's hard to say about block sizes. It really depends on the database workload. Is it highly transactional? Lots of small writes? Or lots of large writes? Wide tables with XML data? Or is the database going to be primarily a read-intensive design?

    A database can also have a mixed workload which can make it very difficult to find an optimal block size. It really boils down to testing configurations with a workload sample.

    Something else to consider: index performance can improve or suffer depending on the file system configuration.

    What we did in our environments was to stop worrying about the perfect disk configuration and use RAID 10 for everything. Then we just scaled out using multiple filegroups with multiple files.

    For example: one filegroup for the smaller tables where IO isn't as critical, and a second filegroup for one or two of the larger tables with high IO, so that their IO is isolated away from other tables.

    For example...

    Filegroup #1
    4 Data files for low IO tables and put each file on a separate LUN

    Filegroup #2
    4 Data files for HIGH IO tables and put each file on a second set of LUNs.

    So... 8 LUNs for the data files and one LUN for the transaction log. You can continue to scale as needed, but note that at some point you will hit diminishing returns. Back in the SQL 2008 days, it was at about 16 files that it was no longer useful to keep adding files.

    Hope this helps.
     
    #10