SQL Server Query Performance Tuning (4th Ed)

Chapter 15 Execution Plan Cache Behavior

The chapter opens with a look at using the DMV sys.dm_exec_cached_plans to obtain information about cached plans. Creating a plan costs resources (CPU etc), so caching a plan should mean subsequent runs of the query use fewer resources. Sometimes query plans are not cached or reused; techniques to improve this are examined, including the “optimize for ad hoc workloads” setting and forced parameterization.
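To give a flavour of the sort of code the chapter contains, here is a minimal sketch (my own illustration, not the book’s exact listing) that lists the most reused cached plans and enables the “optimize for ad hoc workloads” setting:

-- Show the most reused plans currently in the plan cache.
SELECT TOP (20)
       cp.usecounts,        -- how often the plan has been reused
       cp.cacheobjtype,     -- e.g. Compiled Plan
       cp.objtype,          -- e.g. Adhoc, Prepared, Proc
       st.text AS sql_text  -- the underlying SQL text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;

-- Enable the "optimize for ad hoc workloads" server setting, so single-use
-- ad hoc queries initially cache only a small plan stub.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;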

Various recommendations are made to improve the likelihood of plan caching and reuse, including: 

  • Explicitly parameterize variable parts of a query

  • Use stored procedures to implement business functionality

  • Use sp_executesql over EXECUTE for dynamic queries (see the sketch after this list)

  • Avoid modifying environment settings between connections 
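As a hedged illustration of the sp_executesql recommendation (assuming the AdventureWorks sample database the book uses), compare the two approaches below: EXECUTE embeds the value in the SQL text, so each distinct value caches a new plan, whereas sp_executesql parameterizes the statement, so one plan is reused:

DECLARE @id INT = 42;   -- illustrative value

-- EXECUTE: the value is concatenated into the text, giving one cached plan
-- per distinct value.
DECLARE @sql NVARCHAR(200) =
    N'SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = '
    + CAST(@id AS NVARCHAR(10)) + N';';
EXEC (@sql);

-- sp_executesql: the value stays a parameter, so the single cached plan is
-- reused for any @id.
EXEC sp_executesql
     N'SELECT * FROM Sales.SalesOrderHeader WHERE SalesOrderID = @p;',
     N'@p INT',
     @p = @id;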

This chapter provides useful instructions that enable you to produce reusable execution plans. Useful sample code is provided to illustrate the points being made. There’s a good point about not running DBCC FREEPROCCACHE on production systems since it can cause spikes in resource usage.

Chapter 16 Parameter Sniffing

The values of the parameters provided when a query is first run are used to create the cached plan, and this is generally a good thing. A problem can arise if those first parameter values are atypical: subsequent runs of the query can be slow because an inappropriate plan is reused. This is the problem of parameter sniffing. A similar problem can occur if the statistics are stale. The chapter provides details on how to identify parameter sniffing problems, i.e. intermittent or consistently bad performance.

This chapter provides a good description of what parameter sniffing is, how it goes bad, how to identify it, and how to correct it (e.g. sp_recompile, assigning parameters to local variables). A sketch of the local variable technique follows.
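Here is a minimal sketch of the local variable technique, assuming the AdventureWorks sample tables (the procedure name is hypothetical). Because the optimizer cannot sniff the value of a local variable, it compiles a plan for an average row count rather than for the first caller’s specific value:

CREATE PROCEDURE dbo.GetOrdersByCustomer   -- hypothetical procedure
    @CustomerID INT
AS
BEGIN
    -- Copying the parameter into a local variable hides its value from
    -- the optimizer, defeating parameter sniffing.
    DECLARE @LocalCustomerID INT = @CustomerID;

    SELECT SalesOrderID, OrderDate
    FROM Sales.SalesOrderHeader
    WHERE CustomerID = @LocalCustomerID;
END;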

Chapter 17 Query Recompilation

Query recompilation can be both good and bad: good because it can pick up a new index, bad because it might be unnecessary and thus waste resources. The chapter starts by looking at how to identify the SQL statement responsible for the recompilation, using Extended Events (XEs); a sketch of such a session follows the list below. Various causes of recompilation are discussed, including:

  • The schema has changed

  • Statistics have changed

  • SET options have changed
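A hedged sketch of such an XE session is shown below; the session name and file target path are my own illustrative choices:

-- Capture statement-level recompiles, with the SQL text and database name.
CREATE EVENT SESSION RecompileTracking ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\Temp\RecompileTracking.xel'   -- hypothetical path
);

ALTER EVENT SESSION RecompileTracking ON SERVER STATE = START;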

Avoiding unnecessary recompilations can improve performance; methods discussed include: 

  • Don’t interleave DDL and DML statements

  • Avoid recompilation caused by statistics changes

  • Use the OPTIMIZE FOR query hint (see the sketch after this list)
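As a small illustration of the OPTIMIZE FOR hint (the table is from AdventureWorks and the value 30000 is a hypothetical representative customer), the plan is compiled as if the variable held the stated value:

DECLARE @CustomerID INT = 11000;

SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
WHERE CustomerID = @CustomerID
-- Compile the plan for a stated representative value, not the current one;
-- OPTIMIZE FOR UNKNOWN is the variant that targets an average value.
OPTION (OPTIMIZE FOR (@CustomerID = 30000));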

This chapter has some interesting discussions on recompilations and how to reduce them. The section on using XEs to identify which SQL causes the recompilations was a bit incomplete, since it doesn’t discuss how to set up the session via the GUI; that is covered in chapter 6, but there is no cross-reference to it – another reason to have a separate, detailed tools chapter.

Chapter 18 Query Design Analysis

This chapter contains design tips that you should certainly consider when creating SQL code. All the advice is sensible, and includes: 

  • Operate on small result sets (rows and columns)

  • Use indexes effectively

  • Avoid non-sargable search conditions

  • Avoid arithmetic operators on the WHERE clause column

  • Avoid functions on the WHERE clause column (see the sketch after this list)

  • Avoid optimizer hints (since they may vary with releases)

  • Use domain and referential integrity (helps the optimizer produce a better plan)
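As a sketch of the sargability advice (table and dates are illustrative, assuming AdventureWorks), a function on the WHERE clause column hides it from index seeks, while an equivalent range predicate allows them:

-- Non-sargable: YEAR() must be applied to every row, forcing a scan.
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE YEAR(OrderDate) = 2013;

-- Sargable rewrite: the bare column can use an index on OrderDate.
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE OrderDate >= '20130101'
  AND OrderDate <  '20140101';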

This chapter contains sound background advice that you should use whenever you create SQL code.

 

Chapter 19 Reduce Query Resource Use

This chapter contains tips to reduce resource usage and thus improve performance. Again, all the advice is sensible, and includes: 

  • Avoid resource-intensive queries

  • Avoid data type conversion

  • Use UNION ALL over UNION (illustrated in the sketch after this list)

  • Use indexes for aggregate and sort operations

  • Reduce the number of network round-trips

  • Execute multiple queries together

  • Use SET NOCOUNT

  • Reduce the transaction cost

  • Reduce logging overhead (batch multiple small transactions together)

  • Reduce lock overhead (e.g. read-only database, NOLOCK hint) 
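As a sketch of the UNION ALL tip (illustrative tables from AdventureWorks): UNION must remove duplicates, typically adding a costly sort or hash distinct, whereas UNION ALL simply appends the sets:

SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE TerritoryID = 1
UNION       -- deduplicates: extra work
SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE TerritoryID = 2;

SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE TerritoryID = 1
UNION ALL   -- no dedup step: cheaper, and safe here since the sets are disjoint
SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE TerritoryID = 2;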

Similar to the last chapter, this chapter contains sound background advice that you should consider whenever you create SQL code.

 

Chapter 20 Blocking and Blocked Processes

When SQL code runs it takes out locks; if other SQL code wants the same data, blocking can result. Blocking reduces performance, but ensures data consistency. The chapter opens with a discussion of the fundamentals of blocking (e.g. locking, blocking, deadlocks), and moves on to understanding blocking within the ACID (Atomic, Consistent, Isolated, Durable) framework.

The chapter moves on to explain the various levels of locks within the lock object hierarchy, e.g. Page (PAG), before examining lock modes, e.g. Shared (S), and lock escalation. The effects of the various isolation levels are discussed in terms of locking and query performance.

The practical part of the chapter then considers how to capture blocking data (using DMVs to see current blocking, and an XE session for historical data); a sketch of such a DMV query follows the list below. Recommendations for reducing blocking include: 

  • Optimize the queries executed by blocking and blocked SPIDs

  • Decrease the isolation level

  • Partition the contended data 
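Here is a minimal sketch of seeing current blocking via DMVs, in the spirit of the chapter (the column choices are my own illustration):

-- List requests that are currently blocked, with the blocker's SPID.
SELECT r.session_id,
       r.blocking_session_id,   -- the session holding the contended lock
       r.wait_type,
       r.wait_time,             -- ms spent waiting so far
       st.text AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.blocking_session_id <> 0;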

The chapter ends with a look at the automatic collection of blocking data (using SQL Agent to capture the PerfMon counters [Locks: Average Wait Time and Lock Wait Time]).

This chapter provides a very good discussion of many aspects of blocking, including why it is needed, how it affects performance, how it can be captured, and how it can be reduced. There is some useful code for notifying the DBA that blocking is occurring.

Chapter 21 Causes and Solutions for Deadlocks

Deadlocks are essentially blocking that can’t be resolved in the normal manner, i.e. by waiting, so SQL Server chooses one of the pieces of SQL to roll back. The chapter opens with a discussion of what deadlocks are, and how they often result from lock escalation. Code is provided to illustrate how a deadlock can be handled in SQL code via retries. Various methods of collecting deadlock data are discussed, including: 

  • Use Extended Events (xml_deadlock_report) – probably the best option

  • Set trace flag 1204

  • Use trace events 

A detailed discussion on how to interpret the deadlock data is provided (the deadlock graph, and various XML sections). The chapter ends with a look at how deadlocks can be reduced, including: 

  • Access resources in the same physical order

  • Decrease the number of resources accessed (use new covering index)

The chapter provides a good discussion of what deadlocks are, and how they can be captured, analysed, and reduced. I suspect the deadlock retry code provided would be enhanced if it contained a WAITFOR command, so the code retries after a given time period; a sketch of this enhancement follows.
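Below is a hedged sketch of that enhancement: a retry loop that pauses via WAITFOR before retrying when a deadlock (error 1205) is caught. The table and UPDATE are placeholders, not the book’s code:

DECLARE @retries INT = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE Sales.SalesOrderHeader          -- placeholder work
        SET Status = 5
        WHERE SalesOrderID = 43659;
        COMMIT TRANSACTION;
        BREAK;                                 -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            SET @retries -= 1;
            WAITFOR DELAY '00:00:05';          -- pause before retrying
        END
        ELSE
            THROW;                             -- not a deadlock: re-raise
    END CATCH
END;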

Chapter 22 Row-by-Row Processing

SQL is a set-based language, where you say what you want to do rather than how to do it (the latter is how a procedural language like C# works). Cursors undertake row-by-row processing, which is typically slower than the corresponding set-based processing.

The chapter opens with a look at cursor fundamentals; it then moves on to discuss cursor location (client or server), concurrency, and cursor types (forward-only, static, keyset-driven, and dynamic). Various recommendations are given, including: 

  • Use set-based SQL statements over T-SQL cursors

  • When using SQL Server cursors, use the FAST_FORWARD cursor type (see the sketch after this list)

  • When using ADO.NET, use the DataReader object 
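As a sketch of the cursor recommendation (illustrative table from AdventureWorks): when a cursor really is required, FAST_FORWARD declares it forward-only and read-only, the cheapest server cursor type:

DECLARE order_cur CURSOR FAST_FORWARD FOR
    SELECT SalesOrderID FROM Sales.SalesOrderHeader;

DECLARE @id INT;
OPEN order_cur;
FETCH NEXT FROM order_cur INTO @id;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @id;                        -- placeholder row-by-row work
    FETCH NEXT FROM order_cur INTO @id;
END;

CLOSE order_cur;
DEALLOCATE order_cur;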

This chapter provides a good explanation of what cursor processing is, the cursor types available, and how they should be used. The various recommendations should help improve cursor performance. I would have liked a mention that developers from procedural backgrounds (e.g. C) often have a tendency to code in a row-by-row fashion – not knowing how to perform the equivalent set-based processing.



