Microsoft SQL Server 2012 Internals

Author: Kalen Delaney et al.
Publisher: Microsoft Press
Pages: 982
ISBN: 978-0735658561
Audience: DBAs and SQL Developers
Rating: 4.7
Reviewer: Ian Stirk


Chapter 11 The Query Optimizer

The query optimizer examines various metrics and, together with heuristics, creates a good enough query plan that can then be executed. This chapter examines these metrics and heuristics, so you’ll be able to determine why a given query plan was created, which should also help with troubleshooting. Unlike other database components, the optimizer exposes few of its inner workings.

The chapter starts by providing an overview of how a query is executed (parsing, binding, optimization and execution). Creating an execution plan can be expensive, so plans are cached for reuse. Optimization typically produces a tree structure, where each node of the tree is a different query operation; this logical tree structure is later converted into a physical plan. Many potential query plans could be produced; using metrics (e.g. index statistics) and heuristics, the optimizer aims to produce a good enough one (it would take too long to examine all the potential plans and select the optimal one).

The chapter looks at the framework used to create and compare the various potential plans and their costs. The framework includes rules (subdivided into substitution, exploration, and implementation) and properties (information about the query tree). The Memo structure is used to store the optimization alternatives, and is designed to prevent duplication of optimizations. The general implementation of the 40+ operators is explored, in essence each consisting of a GetRow() call that returns a row. Specifically, Project, Sequence Project, semi-join, Apply, Spools, and Exchange (i.e. parallelism) are examined.

The chapter proceeds with a look at the optimizer phases, namely simplification, trivial plan, auto-statistics create/update, exploration/implementation, and conversion to an executable plan. In simplification, the query is rewritten and reordered and contradictions are removed, so that the end result is easier for the optimizer to work with. If the query is simple, with few options, a trivial plan is selected and no further optimizations are examined (note that auto-parameterization may also occur here). Each operation (e.g. scan) has a cost associated with it; this cost is based on statistical information used to estimate the number of rows. The optimizer determines which columns it needs statistics for, creating them if they are absent or stale, otherwise using the existing ones.
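
As a minimal sketch of seeing which statistics the optimizer has created automatically (the _WA_Sys_ name prefix is SQL Server’s convention for these), something like the following can be run in any database:

    -- List statistics the optimizer created automatically in this database
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name                   AS stats_name,   -- auto-created names start with _WA_Sys_
           s.auto_created
    FROM sys.stats AS s
    WHERE s.auto_created = 1
    ORDER BY table_name;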

Having seen that up-to-date statistics are vital input to the optimizer and its ability to produce good query plans, the statistics object is examined in detail. The statistics object is a histogram that records the distribution and density of column values. The DBCC SHOW_STATISTICS command can be used to display statistics information, including the sampling percentage, when the statistics were last updated, and the distribution/density information. String columns often contain many unique values, and since the histogram has only 200 buckets to cover this range of data, another more efficient structure (string statistics, also called trie statistics) is used to store the sampled string data.
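
A minimal sketch of the command (the AdventureWorks table and index names are assumptions; any table and statistics pair will do):

    -- Show the header, density vector and histogram for a statistics object
    DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail',
                          'PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID');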

The chapter provides a very interesting section on cardinality estimation. This is where each operator has an estimated number of rows associated with it, which is used by the optimizer to evaluate alternative operators; the included sample discussion provides a step-by-step explanation.
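
A simple way to see these per-operator estimates without running the query is to request the estimated plan (the table below is an AdventureWorks assumption); the EstimateRows attribute in the returned XML holds each operator’s cardinality estimate:

    SET SHOWPLAN_XML ON;
    GO
    -- Compiled but not executed; returns the estimated plan as XML
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 776;
    GO
    SET SHOWPLAN_XML OFF;
    GO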

Sometimes the optimizer gets things wrong; with this in mind, the chapter examines ways of coercing the optimizer to follow a given path, using various plan hints. Tips on how to debug plan problems are also included.

This chapter was a very interesting read, probably because it contains hard-to-obtain information. The chapter contains a few simple English errors which should have been caught by the editors.

There’s some interesting example code for OPTION (FAST n) that shows a nested loop join being used instead of a hash join, but the associated diagram only includes the SELECT using FAST n. It would have been better to show both plans, i.e. the nested loop join replacing the hash join, and the relative costs of each! Note: both are given when discussing the filtered index example later.
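
For readers wanting to reproduce the effect, a minimal sketch (AdventureWorks table names assumed) is to compare the plan for the same join with and without the hint:

    -- Without the hint the optimizer may choose a hash join; FAST 10 asks it
    -- to optimize for returning the first 10 rows quickly, which tends to
    -- favour a nested loop join instead.
    SELECT soh.SalesOrderID, sod.ProductID
    FROM Sales.SalesOrderHeader AS soh
    JOIN Sales.SalesOrderDetail AS sod
        ON soh.SalesOrderID = sod.SalesOrderID
    OPTION (FAST 10);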


Chapter 12 Plan Caching and Recompilation

Creating a query plan can be complex and expensive. To improve performance, query plans are cached, so that when the query (or a similar one) is run again, time and resources can be saved.

The chapter starts with an examination of the plan cache, a non-fixed section of memory. Example code is provided to show the number of times plans have been reused, using the DMV sys.dm_exec_cached_plans (which has a usecounts column) and the DMF sys.dm_exec_sql_text (which shows the SQL involved). Various ways of clearing the plan cache are given.
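
A minimal sketch of such a query (the NOT LIKE clause is discussed further below):

    -- How often has each cached plan been reused, and for what SQL?
    SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.[text]
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.[text] NOT LIKE '%dm_exec_cached_plans%'  -- exclude this query itself
    ORDER BY cp.usecounts DESC;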

The chapter examines the mechanisms for caching, and code is provided to support the points made. The four caching mechanisms discussed are:


  • Ad hoc query caching – this requires an EXACT match (case, spacing, sarg values). The “optimize for ad hoc workloads” option can be used so that a full plan is not cached unless the query is used at least twice, which prevents plan cache bloat (see the sketch after this list)

  • Autoparameterization – i.e. simple parameterization, where SQL Server treats constants as parameters, so plans can be reused. The PARAMETERIZATION FORCED database option can be used to encourage this

  • Prepared queries – i.e. sp_executesql (also shown in the sketch below); in essence this is halfway between ad hoc queries and stored procedures

  • Stored procedures – also TVFs etc. But there can be problems with parameter sniffing


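A minimal sketch of two of these mechanisms (changing the server option requires appropriate permissions, and the AdventureWorks table name is an assumption):

    -- Ad hoc query caching: stub single-use plans until a second execution
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Prepared query: the parameterized text below is cached once and
    -- reused for any @ProductID value
    EXEC sp_executesql
        N'SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = @ProductID',
        N'@ProductID int',
        @ProductID = 776;
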
Sometimes the cached plans are discarded and recompilation occurs. Reasons for this are discussed, including: schema changes (e.g. a column added or dropped), use of sp_recompile, changes to various SET environment options, stale statistics, and use of temporary tables.
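
For example, recompilation can be forced manually (the table name is an assumption):

    -- Mark all cached plans that reference this table for recompilation
    EXEC sp_recompile N'Sales.SalesOrderDetail';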

The chapter next examines the internals of the plan cache, which could be useful for troubleshooting. The plan cache is stored in four areas of memory known as cache stores, namely Object Plans, SQL Plans, Bound Trees, and Extended Stored Procedures. Code is provided to examine the cache stores. The plan cache metadata, together with the associated DMVs, can provide some very useful diagnostic information (e.g. what SQL is running now). Cache size management is briefly discussed, including how cached plans can be removed when SQL Server is under memory pressure.
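
A minimal sketch of inspecting the four cache stores (the CACHESTORE_* values are the internal names of the stores listed above; column names are as in SQL Server 2012’s memory manager):

    -- Size and entry counts for the four plan cache stores
    SELECT [name], [type], pages_kb, entries_count
    FROM sys.dm_os_memory_cache_counters
    WHERE [type] IN ('CACHESTORE_OBJCP',   -- Object Plans
                     'CACHESTORE_SQLCP',   -- SQL Plans
                     'CACHESTORE_PHDR',    -- Bound Trees
                     'CACHESTORE_XPROC');  -- Extended Stored Procedures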

There is an interesting section on handling compile/recompile problems; solutions include: don’t change the SET environment options, disable auto-update statistics, remove WITH RECOMPILE, and use KEEPFIXED PLAN.

The final section concerns the use of optimization hints and plan guides. The hints discussed include OPTIMIZE FOR, KEEPFIXED PLAN, PARAMETERIZATION, and USE PLAN. The purpose of plan guides is discussed, and example usage code is provided.
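
As a minimal sketch of one of these hints (the table and values are assumptions), OPTIMIZE FOR compiles the plan for a stated parameter value, whatever value is actually passed:

    DECLARE @ProductID int = 897;
    -- Plan is costed as if @ProductID were 776, regardless of the actual value
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = @ProductID
    OPTION (OPTIMIZE FOR (@ProductID = 776));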

This chapter is a good complement to the previous two (query execution and query optimization); it contains details about the types of plans saved, and when they might be reused (or recreated!).

The chapter’s opening SQL shows you how to get the usecounts value to determine how often a plan has been reused; extending this SQL with ORDER BY usecounts (with and without a DESC clause) would have been a bit more useful. The same SQL has “AND [text] NOT LIKE '%dm_exec_cached_plans%'”, which is not explained until later, in the section on ad hoc caching. Its purpose is to exclude the query itself from the results; it might have been better to have used the more common and smarter OPTION (RECOMPILE) for this purpose.

In the section on caching mechanisms the term autoparameterization is used, but later the term “simple parameterization” is used as a heading when discussing the detail. It doesn’t say they are the same thing! The author/editor should have noticed this inconsistency.

Chapter 13 Transactions and Concurrency

Concurrency is the ability to run multiple pieces of SQL at the same time without them interfering with each other. Since shared data might be involved, there is a trade-off between concurrency and data consistency.

The chapter opens with a look at concurrency models. In pessimistic concurrency, lots of conflicts are expected, so locks are taken out to prevent other processes from accessing the data until the owning process has finished with it (i.e. writers block readers). In optimistic concurrency, few conflicts are expected; older versions of rows can be kept so other processes can use these (i.e. writers only block other writers).

The chapter proceeds with a discussion of transactions, what they are and why they are important. This leads naturally to the ACID properties of transactions, i.e. Atomic, Consistent, Isolated, and Durable. The isolation level can be used to relax the contention between processes, but this can lead to data inconsistencies. These inconsistencies include lost updates, dirty reads, nonrepeatable reads, and phantoms. The isolation levels read uncommitted, read committed (locking), read committed (snapshot), repeatable read, snapshot, and serializable are examined in relation to the inconsistencies they allow.
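
A minimal sketch of setting an isolation level for a session (the table name is an assumption):

    -- Under REPEATABLE READ, shared locks are held to the end of the
    -- transaction, preventing nonrepeatable reads (but not phantoms)
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN TRANSACTION;
        SELECT COUNT(*) FROM Sales.SalesOrderDetail;
    COMMIT TRANSACTION;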

Locking is at the heart of transaction processing. Reads acquire shared locks, and writes acquire exclusive locks. SQL Server acquires and releases locks automatically, as well as resolving deadlocks and managing lock incompatibilities. The DMV sys.dm_tran_locks is used to view locking. In addition to locks, latches provide an internal lightweight protection mechanism.
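
A minimal sketch of viewing current locks with this DMV:

    -- Locks currently held or requested by user sessions
    SELECT request_session_id, resource_type, request_mode, request_status,
           DB_NAME(resource_database_id) AS database_name
    FROM sys.dm_tran_locks
    WHERE request_session_id > 50;  -- filters out most system sessions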

The theme of locking continues with a look at lock types including: row, page, index, key range, extent, partition, and table. The lock modes examined include: shared (read), exclusive (modification), update (select data to then update), intent (read but intend to modify), schema stability, schema modification, and bulk update (TABLOCK). There’s a useful illustration of lock granularity together with code to support it, SQL to view locks, and a handy lock compatibility grid.

The chapter continues with a look at the internals of the locking mechanism, where the lock manager maintains a lock hash table. Locks on resources (e.g. row locks) may be escalated (e.g. to a table lock) to improve resource usage, since maintaining many locks can be memory and CPU intensive. The purpose and causes of both lock escalation and deadlocks are examined.

Optimistic concurrency uses row versioning to keep previous versions of rows available, so readers and writers don’t block each other. These row versions are kept in the version store in tempdb, which can grow significantly, so monitoring its space becomes important. The section explains in depth how row versioning works, and useful sample code is provided to examine the rows. The section ends with a look at what to consider when choosing a concurrency model.
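
A minimal sketch of enabling row versioning and monitoring the version store (the database name MyDb is hypothetical):

    -- Enable optimistic concurrency for a database
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

    -- How much tempdb space is the version store using? (8KB pages -> KB)
    SELECT SUM(version_store_reserved_page_count) * 8 AS version_store_kb
    FROM tempdb.sys.dm_db_file_space_usage;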

The last part of the chapter examines how locking can be controlled, using various locking hints (e.g. HOLDLOCK and PAGLOCK). The LOCK_TIMEOUT setting can also be used to control how long SQL Server waits for a lock.
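
A minimal sketch of both (the table name is an assumption):

    SET LOCK_TIMEOUT 5000;  -- fail after 5 seconds rather than waiting indefinitely
    BEGIN TRANSACTION;
        -- HOLDLOCK keeps shared locks until the transaction ends
        -- (serializable behaviour for this statement)
        SELECT COUNT(*) FROM Sales.SalesOrderDetail WITH (HOLDLOCK);
    COMMIT TRANSACTION;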

This chapter’s introduction contains information that is readily available elsewhere, but here it provides context for the later discussions. It contains several useful grids, including the isolation levels and the problems they solve, and lock compatibility. Some useful code for viewing locks and showing how row versioning works is given.


