SQL Server Query Tuning and Optimization (Packt)

Chapter 2 Troubleshooting Queries 

This chapter builds on the tools discussed in the first chapter, giving you additional tools and techniques to monitor, troubleshoot, and tune problem queries. 

The chapter opens with a more detailed look at several useful Dynamic Management Views (DMVs) and Functions (DMFs), specifically: 

  • sys.dm_exec_sessions (shows cumulative resource usage for each session)

  • sys.dm_exec_requests (shows current resource usage of currently running SQL)

  • sys.dm_exec_query_stats (shows historical cumulative resource usage of cached queries)

Useful code is provided to find the most expensive individual queries, as defined by average CPU use and total_worker_time. Although SQL Trace is deprecated, it's still widely used and very familiar to database professionals. A section briefly discusses setting up monitoring using SQL Trace.
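
To give a flavour of the kind of query involved, here is a minimal sketch (my illustration, not the book's exact code) that ranks cached statements by average CPU time using sys.dm_exec_query_stats:

  -- Top 10 cached statements by average CPU (worker) time
  SELECT TOP 10
      SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
          ((CASE qs.statement_end_offset
                WHEN -1 THEN DATALENGTH(st.text)
                ELSE qs.statement_end_offset
            END - qs.statement_start_offset) / 2) + 1) AS statement_text,
      qs.execution_count,
      qs.total_worker_time,
      qs.total_worker_time / qs.execution_count AS avg_worker_time
  FROM sys.dm_exec_query_stats qs
  CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
  ORDER BY avg_worker_time DESC;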

The next section discusses Extended Events (XEs), the preferred lightweight monitoring tool. Events, predicates, actions, and targets are briefly explained. Some very useful code is provided that converts SQL Trace events to XEs (sys.trace_xe_event_map joined to sys.trace_events). This is followed by a walkthrough setting up an XE session using the New Session Wizard GUI. Data can be examined in real time using the Watch Live Data facility, and XQuery code is also provided to extract data from the XE session. Code to find the wait states associated with a given query is also provided.
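
The mapping query is presumably along these lines (a sketch, not the book's exact code):

  -- Map deprecated SQL Trace events to their Extended Events equivalents
  SELECT te.trace_event_id,
         te.name AS trace_event_name,
         em.package_name,
         em.xe_event_name
  FROM sys.trace_events te
  JOIN sys.trace_xe_event_map em
      ON te.trace_event_id = em.trace_event_id
  ORDER BY te.name;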

The chapter ends with a look at the Data Collector, which is typically used for long-term monitoring. A walkthrough is provided showing how it is set up, and how to use its reports (for %CPU, memory, disk I/O, and network usage).

This was a very helpful chapter, discussing various tools (DMVs, SQL Trace, XEs, Data Collector), showing how and why each should be used. The mapping of SQL Trace events to XE events should prove useful to database professionals who are new to XEs.

Some immediately useful code is provided, including identifying the most expensive queries, and finding the wait states for a given query – however, no reason for using the latter was provided.

Chapter 3 The Query Optimizer 

This chapter expands on the query optimizer steps discussed earlier. The optimizer generates various execution plans within a limited time, and selects the plan with the lowest cost.

In overview, the steps can be outlined as:

parsing  → binding → simplification → trivial plan → stats loading → heuristic join reordering  → exploration stage 0 → exploration stage 1 → exploration stage 2 → convert to executable plan

The first four steps do not need access to the database content.

The DMV sys.dm_exec_query_optimizer_info is discussed; it contains useful information about various stages of the optimization process. Although its counters are cumulative, with careful manipulation it can be used for single queries or workloads. The optimizer steps are discussed next.
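
The manipulation in question is presumably the snapshot-and-diff technique; a minimal sketch (my illustration, assuming a quiet test instance, since the counters are server-wide):

  -- Snapshot the optimizer counters, run the query of interest, then diff
  SELECT counter, occurrence, value
  INTO #before
  FROM sys.dm_exec_query_optimizer_info;

  -- ... run the query under investigation here ...

  SELECT a.counter,
         a.occurrence - b.occurrence AS occurrence_delta
  FROM sys.dm_exec_query_optimizer_info a
  JOIN #before b ON a.counter = b.counter
  WHERE a.occurrence <> b.occurrence;

  DROP TABLE #before;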

Parsing checks the SQL syntax, not the objects. Binding ensures the objects and columns exist, together with appropriate permissions. The output of this step is an algebrizer tree, which contains logical operators that relate to the underlying SQL. The trees are not documented, but it is possible to get some understanding of them via the DMV sys.dm_xe_map_values and trace flags – examples are given.

Simplification involves making the trees simpler, enabling easier optimization. Steps include converting subqueries to joins, removing redundant joins, and contradiction detection. Useful example code and discussion are provided for each.
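
To illustrate contradiction detection, a minimal, hypothetical example (not from the book): a predicate that contradicts a CHECK constraint lets the optimizer skip the table entirely, leaving a Constant Scan in the plan:

  -- Hypothetical table with a CHECK constraint
  CREATE TABLE dbo.Orders
  (
      OrderID int NOT NULL,
      Qty     int NOT NULL CHECK (Qty > 0)
  );

  -- The predicate contradicts the constraint, so no rows can ever qualify;
  -- the plan shows a Constant Scan instead of a table access
  SELECT OrderID, Qty
  FROM dbo.Orders
  WHERE Qty < 0;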

The purpose of the trivial plan step is to identify an obvious best plan; if there is one, the optimization process can be terminated early. The OptimizationLevel element of the plan shows whether a trivial plan has been used.
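
For example (my illustration, assuming the AdventureWorks sample database), a single-table query with no real plan choices is likely to get a trivial plan, visible as StatementOptmLevel="TRIVIAL" in the plan XML:

  -- Likely optimized with a trivial plan (one obvious access path)
  SELECT SalesOrderID, OrderDate
  FROM Sales.SalesOrderHeader;

  -- Adding a join gives the optimizer choices, so full optimization is used
  SELECT soh.SalesOrderID, sod.ProductID
  FROM Sales.SalesOrderHeader soh
  JOIN Sales.SalesOrderDetail sod ON soh.SalesOrderID = sod.SalesOrderID;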

The optimizer uses transformation rules, based on relational algebra, to generate alternative but equivalent plans, some of which are cheaper. These alternatives are stored in the Memo structure. The DMV sys.dm_exec_query_transformation_stats gives some information on the various rules. It's possible to enable/disable transformation rules globally via the undocumented DBCC RULEON/RULEOFF commands, and at the statement level using the QUERYRULEON/QUERYRULEOFF hints – examples are provided of their usage and their impact on plans. The optimizer needs statistics for cardinality estimates; the undocumented trace flags 9292 and 9204 can be used to discover which statistics are loaded.
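
A hedged illustration of the statement-level hint (my sketch, not the book's code; these hints are undocumented, so names and behaviour may vary between versions; JNtoNL is the rule that implements a logical join as Nested Loops):

  -- Disable the "join to nested loops" transformation rule for this statement
  -- only, forcing the optimizer to pick a different join implementation
  SELECT soh.SalesOrderID, sod.ProductID
  FROM Sales.SalesOrderHeader soh
  JOIN Sales.SalesOrderDetail sod ON soh.SalesOrderID = sod.SalesOrderID
  OPTION (RECOMPILE, QUERYRULEOFF JNtoNL);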

If a trivial plan is not created, then full optimization is undertaken. This has three stages, with different rules applied at each stage. Since many potential plans can be generated, heuristics are used to reduce their number. If a good enough plan is found at any stage, the optimization process is terminated, and the best plan selected. The reason for exiting the optimization process is recorded in the StatementOptmEarlyAbortReason element of the plan (a sketch for extracting this element from cached plans follows the list below). The stages are:

  • Search 0 (tries to find a plan quickly without complex transformations. Ideal for small queries)
  • Search 1 (called quick plan. Limited join reordering; can also consider whether a parallel plan would be better)
  • Search 2 (called full optimization. For complex to very complex queries. Larger set of transformation rules, advanced optimization strategies. A plan must be found here)
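
A sketch (my illustration, not the book's code) of pulling both attributes out of cached plans with XQuery:

  -- Report optimization level and any early-abort reason for cached plans
  WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
  SELECT TOP 20
      qp.query_plan.value('(//StmtSimple/@StatementOptmLevel)[1]',
                          'varchar(25)') AS optimization_level,
      qp.query_plan.value('(//StmtSimple/@StatementOptmEarlyAbortReason)[1]',
                          'varchar(50)') AS early_abort_reason,
      cp.usecounts
  FROM sys.dm_exec_cached_plans cp
  CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
  WHERE qp.query_plan IS NOT NULL;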

This was an interesting chapter, containing lots of insight (and inference) on how the optimizer works. Many undocumented trace flags and options are given to gain greater insight into the optimization process. The author rightly warns these should be used for educational purposes only.

This chapter does a good job of explaining the various steps in the optimization process. Useful code is provided to support the assertions made. There's a useful diagram showing the optimization process. The use of DMV snapshots to make inferences is a useful technique. The example code relating to contradiction detection and foreign key join elimination was particularly insightful.

Part of the problem with this chapter is that the optimization process is only partly documented – so the chapter has to do the best with what is available, and it does an admirable job.

Chapter 4 The Execution Engine

The query operators in the plan perform the actual work. While there are many operators, the discussion here concentrates on the major ones: data access, aggregation, joins, parallelism, and updates.

The chapter opens with a look at data access operators, which access tables or indexes. The specific operators discussed, with examples, are listed below (a brief sketch follows the list):

  • Scans (read the entire structure)
  • Seeks (looks for one or a few rows)
  • Bookmark lookups (data not in index, so goes to underlying Clustered Index row)
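
A minimal illustration of the seek/scan difference (my sketch, assuming the AdventureWorks sample database):

  -- Clustered index seek: the predicate matches the index key
  SELECT SalesOrderID, OrderDate
  FROM Sales.SalesOrderHeader
  WHERE SalesOrderID = 43666;

  -- Clustered index scan: no useful predicate, the whole structure is read
  SELECT SalesOrderID, OrderDate
  FROM Sales.SalesOrderHeader;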

Next, aggregates, which summarise information, are examined. There are two types: stream aggregates and hash aggregates. The specific operators discussed, with examples, are listed below (a sketch follows the list):

  • Sorting and hashing (both are memory-consuming operations; hashing builds a hash table, and sorting can spill to tempdb)
  • Stream aggregate (requires sorted input; in the plan, stream aggregation computes AVG as SUM / COUNT, checking for a count of 0)
  • Hash aggregate (used for big tables whose input is not sorted)
  • Distinct sort (DISTINCT can be rewritten as a GROUP BY)
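
A sketch of the distinction (my illustration, assuming AdventureWorks): with an index supplying order on the grouping column the optimizer can use a stream aggregate; without one, a hash aggregate is typical:

  -- Grouping on the clustered key: the input is already sorted,
  -- so a stream aggregate is likely
  SELECT SalesOrderID, SUM(LineTotal)
  FROM Sales.SalesOrderDetail
  GROUP BY SalesOrderID;

  -- Grouping on an unindexed column: a hash aggregate is likely
  SELECT CarrierTrackingNumber, SUM(LineTotal)
  FROM Sales.SalesOrderDetail
  GROUP BY CarrierTrackingNumber;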

The section on joins discusses the three join types, and states that no one type is always best – the choice depends on the particular circumstances. The specific operators discussed, with examples, are listed below (a sketch follows the list):

  • Nested loops (for each outer row, loops around all rows in the inner input; good for small inputs. Cost is proportional to the size of the outer input multiplied by the size of the inner input)
  • Merge (wants sorted inputs, can sort them if that is cheaper; equi-joins only. Maximum cost is the sum of both inputs)
  • Hash (similar to a merge join, but the inputs don't need to be sorted. Cost is the sum of both inputs)
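
As a quick illustration (my sketch, not the book's code), the documented join hints can be used to see each physical operator for the same logical join – generally advisable only for experimentation:

  -- Same logical join, three possible physical operators
  SELECT soh.SalesOrderID, sod.ProductID
  FROM Sales.SalesOrderHeader soh
  JOIN Sales.SalesOrderDetail sod ON soh.SalesOrderID = sod.SalesOrderID
  OPTION (LOOP JOIN);   -- nested loops

  -- OPTION (MERGE JOIN) and OPTION (HASH JOIN) force the other two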

Next parallelism is examined. If there are at least 2 CPUs, running the query with parallelism might be cheaper. The “max degree of parallelism” and “cost threshold for parallelism” server configuration options are discussed. Additionally, parallelism can be restricted/increased at the query level using the MAXDOP hint. There’s an interesting discussion of features that may prevent the use of a parallel plan. Helpful examples are provided.
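
For reference, a sketch of the knobs mentioned (my illustration; the values are purely illustrative):

  -- Server-wide settings (requires 'show advanced options')
  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'max degree of parallelism', 4;        -- illustrative value
  EXEC sp_configure 'cost threshold for parallelism', 50;  -- illustrative value
  RECONFIGURE;

  -- Per-query override: restrict this statement to a serial plan
  SELECT ProductID, SUM(LineTotal)
  FROM Sales.SalesOrderDetail
  GROUP BY ProductID
  OPTION (MAXDOP 1);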

The last section looks at the update process, which may involve updating indexes, validating constraints/RI, running triggers, and running cascading updates. The process can become quite complex as the number of tables increases. Per-row and per-index plans are discussed, together with how the Halloween problem is overcome.

This was another insightful chapter, concentrating on the major operators that perform the actual work. This chapter should prove useful in deciphering your own plans. There’s some instructive code showing how similar SQL can produce differing plans based on row count estimates. One example shows either a nested loop join or a merge join being used, another example shows a key lookup or a Clustered Index scan being used – based on the row count estimates of the supplied input.

It’s instructive to see how indexes and row counts can influence the operators used, showing the importance of indexes and statistics – these are discussed in the next 2 chapters.

Chapter 5 Working with Indexes

Indexes are often the primary means of improving query performance. This chapter discusses the different types of indexes, their structure, how they affect performance, together with missing indexes, and the effect of index fragmentation.

The chapter opens with a summary of the different types of indexes. Heaps, clustered indexes, non-clustered indexes, unique indexes, and primary keys are discussed with useful examples. The desirable properties of clustered index keys (i.e. unique, narrow, static, and increasing) are discussed with examples. Both covering and filtered indexes are discussed.
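
A sketch of the less common variants (my illustration; the index names are hypothetical):

  -- Covering index: INCLUDE adds non-key columns so the query
  -- can be answered from the index alone
  CREATE NONCLUSTERED INDEX IX_SOH_CustomerID_Covering   -- hypothetical name
  ON Sales.SalesOrderHeader (CustomerID)
  INCLUDE (OrderDate, TotalDue);

  -- Filtered index: only indexes the subset of rows matching the predicate
  CREATE NONCLUSTERED INDEX IX_SOH_OnlineOrders          -- hypothetical name
  ON Sales.SalesOrderHeader (OrderDate)
  WHERE OnlineOrderFlag = 1;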

Next, index operations are examined, including navigating through B-trees to find data via a scan or a seek. Various predicates are examined (=, <, >, IN, BETWEEN, != etc.), followed by a helpful discussion showing why an index might not be used.
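
A classic reason an index goes unused is a non-SARGable predicate; a brief sketch (my illustration, assuming an index on OrderDate):

  -- Non-SARGable: the function on the column prevents an index seek
  SELECT SalesOrderID
  FROM Sales.SalesOrderHeader
  WHERE YEAR(OrderDate) = 2011;

  -- SARGable rewrite: a range predicate on the bare column allows a seek
  SELECT SalesOrderID
  FROM Sales.SalesOrderHeader
  WHERE OrderDate >= '20110101' AND OrderDate < '20120101';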

The Database Engine Tuning Advisor (DTA) is discussed next. This tool takes a query or workload as its input, and determines which indexes should be created or removed to enhance the query's performance. The examples show how the plan cache can be used to identify which indexes should be created or removed. There is an interesting section on using the DTA to tune a production database via a test server. It might have been useful to say that Query Store can also be used as an input to the DTA.

Next, missing indexes are examined; details of these are included within individual execution plans, and also cumulatively via the missing index DMVs. There are some limitations, which are discussed in a provided website link.
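
The missing index DMVs are presumably queried along these lines (a sketch, not the book's code):

  -- Missing index suggestions, ranked by a rough estimate of their benefit
  SELECT d.[statement] AS table_name,
         d.equality_columns,
         d.inequality_columns,
         d.included_columns,
         s.user_seeks,
         s.avg_user_impact
  FROM sys.dm_db_missing_index_details d
  JOIN sys.dm_db_missing_index_groups g
      ON d.index_handle = g.index_handle
  JOIN sys.dm_db_missing_index_group_stats s
      ON g.index_group_handle = s.group_handle
  ORDER BY s.user_seeks * s.avg_user_impact DESC;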

Index fragmentation occurs when the logical row order doesn't match the physical row order; it can be detrimental to the performance of scans. The DMF sys.dm_db_index_physical_stats can be used to determine the amount of fragmentation; typically, indexes with more than 30% fragmentation are rebuilt, and those between 10% and 30% are reorganised. Lastly, code is provided to identify unused indexes by querying the DMV sys.dm_db_index_usage_stats; these can be considered for removal since, while they are not used for retrieval, they are likely to degrade the performance of updates.
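
A sketch of the fragmentation check (my illustration, using the thresholds above):

  -- Fragmentation of all indexes in the current database
  SELECT OBJECT_NAME(ps.object_id) AS table_name,
         i.name AS index_name,
         ps.avg_fragmentation_in_percent,
         ps.page_count
  FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
  JOIN sys.indexes i
      ON ps.object_id = i.object_id AND ps.index_id = i.index_id
  WHERE ps.avg_fragmentation_in_percent > 10
  ORDER BY ps.avg_fragmentation_in_percent DESC;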

This was a more traditional chapter, detailing information that's mostly available elsewhere, but needed here for completeness; besides, it is very well written. The author makes the valid point of ensuring you test any recommendations on your own system instead of blindly applying them. The section using a test server to tune a production workload was very useful.

I was surprised the section on unused indexes didn't warn readers that an index might be needed only periodically (e.g. for quarterly reporting), so ensure further analysis is done before you remove indexes.

Chapter 6 Understanding Statistics

Database statistics contain details about the distribution and density of column values; they are used as input to the optimizer to provide a cardinality estimate (number of rows), and thus affect the plan cost. It is very important that the statistics are up to date. The chapter opens with a discussion about the importance of having useful statistics to help the optimizer produce a good plan.

The section on creating and updating statistics shows the importance of keeping the default database configuration options AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS enabled. It is noted that index rebuilds result in up-to-date statistics with 100% sampling. Statistics are typically updated automatically after a given threshold of changes is exceeded.

There's a useful section on inspecting statistics objects using the DBCC SHOW_STATISTICS command. The objects contain three pieces of information: the density, histogram, and string statistics, which are examined in detail, and examples relate them to the cardinality estimate in the plan.
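
For reference, the command's documented form (the target names are illustrative, assuming AdventureWorks):

  -- Show the header, density vector and histogram for one statistics object
  DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', IX_SalesOrderDetail_ProductID);

  -- Or just the histogram
  DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', IX_SalesOrderDetail_ProductID)
  WITH HISTOGRAM;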

Since SQL Server 2014, there has been a new cardinality estimator that can give better performance, especially if the columns are dependent. If performance degrades, it is possible to use the previous cardinality estimator – examples are provided of using both.
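
One way to revert, per query, is the documented USE HINT (my sketch, not the book's code; trace flag 9481 is the older alternative):

  -- Force this statement to use the legacy (pre-2014) cardinality estimator
  SELECT ProductID, COUNT(*)
  FROM Sales.SalesOrderDetail
  GROUP BY ProductID
  OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));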

Incremental statistics, filtered statistics, statistics on ascending columns, and statistics on computed columns are all described with helpful examples. Perhaps one of these will solve an unusual performance problem you might have.

Since statistics are vitally important in creating a good plan, it is important that they are kept up to date. The default automatic updating of statistics may not be adequate, so it is possible to update your statistics more regularly, and with a bigger sample.
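
A sketch of a manual, fuller update (my illustration; the table name is illustrative):

  -- Update all statistics on one table, reading every row
  UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN;

  -- Or a larger-than-default sample
  UPDATE STATISTICS Sales.SalesOrderDetail WITH SAMPLE 50 PERCENT;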

This chapter provided a comprehensive discussion concerning all aspects of database statistics. In addition to being wide ranging, useful example code is provided throughout to illustrate the concepts. The detailed example showing how the cardinality estimate is derived from the statistics was very useful. It might have been helpful to have provided a routine that shows the current state of the statistics, for example, when the stats were last updated and what percentage of the rows have changed since then, but this is a minor point.
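
Such a routine can be sketched with sys.dm_db_stats_properties (my illustration, SQL Server 2012 SP1 and later):

  -- Last update time and rows modified since, for every statistics object
  SELECT OBJECT_NAME(s.object_id) AS table_name,
         s.name AS stats_name,
         sp.last_updated,
         sp.rows,
         sp.modification_counter,
         100.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS pct_modified
  FROM sys.stats s
  CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
  WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  ORDER BY sp.modification_counter DESC;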


