Pro SQL Server Internals

Chapter 25 Query Optimization and Execution

This chapter looks at the little-documented query optimizer. The chapter opens with a discussion of the query life cycle, whose main steps are: parse (validate the syntax and transform the query into a tree), bind (bind objects and load metadata, e.g. NOT NULL constraints), optimize (create the execution plan), and execute.

Query optimization involves: simplification, a trivial-plan check, a statistics update, and cost-based optimization (stages 0, 1, and 2), leading to the generation of the execution plan. Some limited information about the optimization process can be obtained via the DMV sys.dm_exec_query_optimizer_info.
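
For example, a quick look at how often each optimization stage has been used since the instance started can be obtained with a query along these lines (the counters shown are an illustrative subset):

  -- cumulative, instance-wide optimizer counters
  SELECT counter, occurrence, value
  FROM sys.dm_exec_query_optimizer_info
  WHERE counter IN ('optimizations', 'trivial plan', 'search 0', 'search 1', 'search 2');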

Query execution executes the execution plan, which is a tree-like structure containing operators (iterators). There is a row-based execution model, and SQL Server 2012 added a batch-mode execution model, which was further enhanced in 2014.

Various operators are discussed in detail, these include:

  • joins: nested loop [best for small inputs], merge [best for sorted inputs], hash [best for large unsorted inputs]

  • aggregates: perform calculations on a set of data and return a single value, e.g. MIN

  • spools: internal in-memory or on-disk caches/temp tables, used when cached data is needed many times

  • parallelism: can reduce query time, but has a cost – the overhead of managing threads

The chapter then discusses query and table hints. While the optimizer generally produces a good plan, in some cases you can tune the plan with hints; these should be used as a last resort, and re-evaluated regularly (a syntax sketch follows the list). The following hints are described:

  • index (the most common hint; referring to the index by name is better than by number)

  • force order (force the join order)

  • loop, merge, hash join (force the join type)

  • forceseek/forcescan (force the index access mechanism)

  • noexpand/expand (don't expand/do expand the indexed view)

  • fast n (get the first n rows fast)
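
As a sketch of the syntax only (the table, index, and literal values here are hypothetical), table hints are supplied via WITH on the table, and query hints via OPTION on the statement:

  SELECT o.OrderId, c.CustomerName
  FROM dbo.Orders o WITH (INDEX(IX_Orders_CustomerId), FORCESEEK)
    INNER JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
  WHERE o.CustomerId = 42
  OPTION (LOOP JOIN, FORCE ORDER, FAST 10);  -- force join type and order, return the first 10 rows fast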

 

There are some useful optimization tips including:

  • for parameter sniffing, ‘OPTIMIZE FOR’ or a statement-level recompile is often better than using an index hint (sketched after this list)

  • one legitimate use-case for using an INDEX hint is forcing SQL Server to use one of the composite indexes in those cases where correct cardinality estimation is impossible

  • you can often remove spools (expensive in I/O) by creating appropriate indexes

  • use SQL Sentry ‘Plan Explorer’ to provide additional plan information
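
For the parameter-sniffing tip above, a minimal sketch (the table, column, and parameter are hypothetical):

  -- build the plan for a typical parameter value rather than the sniffed one:
  SELECT OrderId, Amount
  FROM dbo.Orders
  WHERE CustomerId = @CustomerId
  OPTION (OPTIMIZE FOR (@CustomerId = 42));

  -- or pay the compilation cost and get a fresh plan on every execution:
  SELECT OrderId, Amount
  FROM dbo.Orders
  WHERE CustomerId = @CustomerId
  OPTION (RECOMPILE);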

This is a very wide-ranging chapter, about a little-documented but very important component. Excellent examples are used to support the assertions made, and there are good links to other chapters and websites. I would have liked the author to have explicitly stated that hints are not really hints: rather, they force the optimizer down a given path.

 

Chapter 26 Plan Caching

The optimization process can be resource intensive, causing a significant load on busy systems. This can be reduced by reusing cached execution plans.

The chapter starts with an overview of plan caching. Cached plans are stored in memory, reducing the memory available to the buffer pool; this can result in data being reread from the physical disks, leading to slower queries. Plans can get recompiled for various reasons, including schema changes and stale statistics. Changing various SET options will cause multiple copies of a plan to be cached.

Plans are created based on the parameters supplied when they’re first run (i.e. parameter sniffing occurs). If the parameters supplied are atypical, then subsequent runs can give poor performance. The chapter discusses various ways of overcoming this problem.

Ad-hoc queries are rarely reused, and their plans can collectively take up a lot of memory. Enabling the ‘optimize for ad hoc workloads’ setting ensures that a full plan is only cached once a query has been executed at least twice, thus reducing the pressure on memory.
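
Enabling the setting is a one-off configuration change via the standard sp_configure route:

  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'optimize for ad hoc workloads', 1;
  RECONFIGURE;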

Plan guides, auto-parameterization, and plan cache internals are all discussed with plenty of example code. The plan cache is examined using the DMVs sys.dm_exec_cached_plans and sys.dm_exec_plan_attributes.
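
A typical way of inspecting the cache with these DMVs is to join to the query text, something like this minimal sketch:

  SELECT TOP (20) cp.usecounts, cp.objtype, cp.size_in_bytes, st.text
  FROM sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
  ORDER BY cp.usecounts DESC;  -- most heavily reused plans first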

This chapter provides a good overview of plan caching, parameter sniffing, and plan reuse. Plan problems and their solutions are given. It is suggested that ‘optimize for ad hoc workloads’ be enabled (it is disabled by default). There’s some very useful code that shows the currently running SQL.

Chapter 27 System Troubleshooting

The chapter opens with a salient reminder that performance problems need to be viewed within the context of the whole system. Items examined include:

  • Hardware – SQL Server is I/O intensive, ensure partition alignment is correct

  • OS – aim to use 64-bit, allowing more memory to be addressed. Ensure the virus checker excludes the database files. Ensure resources are not over-allocated

  • Database options – don’t use auto-shrink or auto-close

  • Slow individual queries

The chapter then discusses traditional wait stats analysis, i.e. the reasons why SQL is waiting to execute, and explains waits relating to I/O, memory, CPU, parallelism, locking, and tempdb contention. In each case, potential solutions for reducing the wait are given.
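
The classic starting point for wait stats analysis is sys.dm_os_wait_stats, along these lines (the excluded benign wait types are a small illustrative subset):

  SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
  FROM sys.dm_os_wait_stats
  WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
  ORDER BY wait_time_ms DESC;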

There’s an interesting section on what to do when SQL Server is not responding; this includes: ensuring the infrastructure is OK, checking the SQL Server error log, and using the dedicated administrator connection (DAC). Additionally, there’s useful code that identifies runaway queries.

This chapter is a must for troubleshooting professionals. It contains plenty of tips to identify problems, together with their solutions. There’s a reference to Tom Davidson’s excellent waits and queues article. There’s a great tip about changing the ‘cost threshold for parallelism’ on OLTP systems instead of MAXDOP. There’s a very useful grid of common problems and their solutions.

I was a bit surprised the correlation between PerfMon counters and wait stats wasn’t discussed. Also, no mention was made of changing the Windows power plan from Balanced (the default) to High Performance. Lastly, there was no mention of sp_server_diagnostics, a very useful tool.

 

Chapter 28 Extended Events

This chapter discusses Extended Events (XE), a scalable, lightweight, and flexible performance monitoring and troubleshooting solution introduced in SQL Server 2008. Its aim is to replace SQL Trace, which has been deprecated. XE in 2008 was relatively limited and did not cover all the SQL Trace events; however, this was corrected in SQL Server 2012. Additionally, since 2012, a supplied user interface has made creating XE sessions much easier.

The various components of XE are discussed (typically version specific), including:

  • packages – containers for XE objects

  • events – e.g. SQL statement completion, deadlock, etc.

  • predicates – define when an event fires, e.g. cpu_time of sql_statement_completed > 5000

  • actions – collect extra information with the event, e.g. session_id, client_app_name

  • types and maps – map internal values to names, e.g. in the wait_types map the value 12 represents LCK_M_BU

  • targets – where the collected data is stored for later analysis

There’s a useful example of creating an XE session to capture tempdb spills using the hash_warning and sort_warning events. The section then looks at how to monitor the live stream of event data, or the collected data, using the DMVs sys.dm_xe_sessions and sys.dm_xe_session_targets.
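
A minimal sketch of such a session (the session name and file target are assumptions, not the book’s exact code):

  CREATE EVENT SESSION TempDbSpills ON SERVER
  ADD EVENT sqlserver.hash_warning,
  ADD EVENT sqlserver.sort_warning
  ADD TARGET package0.event_file (SET filename = N'TempDbSpills.xel');

  ALTER EVENT SESSION TempDbSpills ON SERVER STATE = START;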

The very useful system_health and AlwaysOn_health XE sessions are discussed. system_health is created and running by default on all systems, and contains information relating to errors, excessive waits/locks, and deadlocks. AlwaysOn_health collects information about errors and failovers.

This chapter contains a good discussion of the purpose and usage of XE. There’s some useful XE code that captures expensive queries that have more than 5 seconds of CPU usage or more than 10,000 logical reads or writes; this code is easily adaptable to your own monitoring needs.

Chapter 29 Transaction Log Internals

This chapter examines the purpose of the transaction log, its internal structure, problems with transaction log growth, and best practices. The transaction log is needed for recovery: when SQL Server restarts, it runs a recovery process in each database, which rolls back any uncommitted transactions and reapplies any committed transactions. The transaction log guarantees data consistency. Supplied example code illustrates these concepts.

SQL Server 2014 introduced the concept of delayed durability. While commits are synchronous by default, this option allows asynchronous, delayed commits – this can give improved performance at the cost of a potential loss of data. The log records are flushed to disk when the log buffer is full, a checkpoint is issued, sp_flush_log is called, or a fully durable transaction commits. It is important to consider the impact delayed durability has on other SQL Server components, including backups and various high availability (HA) options – these components only see committed transactions!
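
The option is set at the database level and can then be requested per transaction (the database name is hypothetical):

  ALTER DATABASE YourDb SET DELAYED_DURABILITY = ALLOWED;  -- or FORCED / DISABLED

  BEGIN TRANSACTION;
    -- ... data modifications ...
  COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

  EXEC sys.sp_flush_log;  -- force any pending log records to disk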

Although the transaction log is typically seen as a single sequential file, internally it consists of many virtual log files (VLFs). A VLF is active if it contains at least one active transaction. Truncation marks part of the log as inactive so it can be reused. VLFs can be examined via DBCC LOGINFO.
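
For example:

  DBCC LOGINFO;  -- one row per VLF; a Status value of 2 means the VLF is active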

The chapter discusses the database recovery models, since these affect transaction log behaviour. SIMPLE mode means the log doesn’t need to be backed up; the log exists only to provide rollback capabilities. BULK_LOGGED and FULL modes do require a log backup to truncate the log.

The problems of excessive transaction log growth are discussed, together with their solutions. Looking at the log_reuse_wait_desc column of sys.databases will show the reason why the log can’t be truncated.
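
For example (the database name is illustrative):

  SELECT name, log_reuse_wait_desc
  FROM sys.databases
  WHERE name = N'YourDb';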

In the log management section various best practices are discussed, these include:

  • better to manage log growth manually than to rely on auto-grow

  • watch the number of VLFs

  • auto-grow is OK in an emergency

  • OLTP needs fast disks

  • the log is written sequentially, data access is random – best to place them on separate disks

This chapter examined the purpose of the transaction log, its internal structure, transaction log growth, and best practices. There’s a useful overview of 2014’s delayed durability option and its consequences for related SQL Server components. There’s a good example of the effect of data modifications on the workings of the log. There’s some very useful code that identifies the five oldest transactions. There’s a very useful list of dos and don’ts for when you get a log-full error.
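
The book’s exact code isn’t reproduced here, but one way to approximate the oldest-transactions check is:

  SELECT TOP (5) t.transaction_id, t.name, t.transaction_begin_time
  FROM sys.dm_tran_active_transactions t
  ORDER BY t.transaction_begin_time;  -- longest-running first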
