SQL Server 2017 Query Performance Tuning, 5th Ed


Chapter 23 Row-by-Row Processing

SQL is a set-based language, where you say what you want to do rather than how to do it (the latter is how a procedural language like C# works). Cursors perform row-by-row processing, which is typically slower than the corresponding set-based processing.
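
As a minimal sketch of the difference (my own illustration, not from the book; dbo.Product is a hypothetical table), the same price update written both ways:

```sql
-- Row-by-row: a cursor touches each row individually (typically slower).
DECLARE @ProductID int;
DECLARE PriceCursor CURSOR LOCAL FOR
    SELECT ProductID FROM dbo.Product WHERE Discontinued = 0;
OPEN PriceCursor;
FETCH NEXT FROM PriceCursor INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Product SET Price = Price * 1.10
    WHERE ProductID = @ProductID;
    FETCH NEXT FROM PriceCursor INTO @ProductID;
END;
CLOSE PriceCursor;
DEALLOCATE PriceCursor;

-- Set-based: one statement does the same work, and the optimizer
-- is free to choose an efficient plan for the whole set.
UPDATE dbo.Product SET Price = Price * 1.10 WHERE Discontinued = 0;
```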

The chapter opens with a look at cursor fundamentals, then moves on to discuss cursor location (client or server), concurrency, and cursor types (forward-only, static, keyset-driven, and dynamic). Various recommendations are given, including:

  • Use set-based SQL statements over T-SQL cursors

  • When using SQL Server cursors, use the FAST_FORWARD cursor type

  • When using ADO.NET, use the DataReader object 
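
Where a cursor genuinely is needed, the FAST_FORWARD recommendation above translates into T-SQL along these lines (a sketch; dbo.Orders is a hypothetical table):

```sql
-- FAST_FORWARD = forward-only + read-only, the cheapest
-- server-side cursor type.
DECLARE OrderCursor CURSOR FAST_FORWARD FOR
    SELECT OrderID, OrderTotal
    FROM dbo.Orders
    WHERE OrderDate >= '20170101';
```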

This chapter provides a good explanation of what cursor processing is, the cursor types available, and how they should be used. The various recommendations should help improve cursor performance. I would have liked a mention that developers from procedural backgrounds (e.g. C) often tend to code in a row-by-row fashion, not knowing how to perform the equivalent set-based processing.

Chapter 24 Memory-Optimized OLTP Tables and Procedures

In-memory tables perform faster than disk-based tables because locks and latches aren’t used, and reading from physical disk is much slower than reading from memory.

The chapter opens with a look at in-memory OLTP fundamentals, examining the reasons for its use and how it is implemented (an optimistic concurrency approach coupled with versioned data changes). Various limitations are discussed (e.g. increased memory requirements). A useful step-by-step example of creating in-memory OLTP tables is provided. There’s a helpful discussion about hash indexes and getting the bucket count correct. Natively compiling stored procedures results in fewer CPU instructions being executed which, compared with interpreted SQL, gives faster performance; this is discussed with useful example code.
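
The general shape of this is as follows (a sketch, not the book's own code; it assumes a database that already has a memory-optimized filegroup, and dbo.SessionCache is a hypothetical table):

```sql
-- A memory-optimized table with a hash index; BUCKET_COUNT should be
-- roughly one to two times the expected number of distinct key values.
CREATE TABLE dbo.SessionCache
(
    SessionID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserName nvarchar(50) NOT NULL,
    LastSeen datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- A natively compiled stored procedure over the same table.
CREATE PROCEDURE dbo.GetSession @SessionID int
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT SessionID, UserName, LastSeen
    FROM dbo.SessionCache
    WHERE SessionID = @SessionID;
END;
```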

There’s a brief look at the various included tools: the Memory Optimization Advisor, which suggests which tables might be migrated, and the Native Compilation Advisor, which suggests which stored procedures might be migrated.

This chapter provides a useful overview of what in-memory tables and natively compiled stored procedures are, how they work, and how they improve performance. There is some useful template code provided.

I was surprised the chapter didn’t mention the Transaction Performance Analysis Overview report, which identifies the tables and stored procedures that would benefit most from migration to In-Memory OLTP. The tools discussed in this chapter appear to be a subset of it.

The chapter says, “An in-memory table can have up to eight indexes created on it at one time”; however, this limit was removed in SQL Server 2017. Similarly, although the chapter states earlier that non-Enterprise editions of SQL Server now include in-memory technology, the recommendations section says the Enterprise edition is needed. This chapter should have been edited more thoroughly.

Chapter 25 Automated Tuning in Azure SQL Database and SQL Server

Automatically tuning databases potentially offers big gains for relatively little effort. With SQL Server 2017 (and Azure SQL Database), Microsoft has begun using automated tuning to improve the performance of your queries.

The chapter looks at: 

  • Automatic plan correction (based on plans/stats in Query Store)

  • Azure SQL Database automatic index management (a useful walkthrough is given)

  • Adaptive query processing (dynamically selecting the best plan options) 

This chapter provides some useful example code to examine automated tuning in SQL Server (much of the underlying mechanism is hidden). Tuning recommendations exposed via the DMV sys.dm_db_tuning_recommendations are explained.
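
To give a flavour of the feature (my own sketch, not the book's code): automatic plan correction is switched on per database, and its recommendations can then be inspected via the DMV:

```sql
-- Enable automatic plan correction (requires Query Store to be enabled).
ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Inspect the recommendations the engine has produced; the details
-- column is JSON and includes the corrective T-SQL script.
SELECT name,
       reason,
       JSON_VALUE(details, '$.implementationDetails.script')
           AS corrective_script
FROM sys.dm_db_tuning_recommendations;
```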

Overall, this is a very interesting chapter; automated tuning is certainly an ever-expanding area to keep an eye on for the future.

Chapter 26 Database Performance Testing

This chapter details how to capture, record, and replay the data needed for database performance testing. It starts with how to capture a load for replay (i.e. back up the database, and run a server-side trace via SQL Profiler). The database backup is then restored, any changes applied (e.g. new indexes), and the Distributed Replay tool used to replay the load. During the replay another trace is recorded; this trace and the original can then be compared to determine whether the change has had a positive impact.

Step-by-step instructions and screenshots show how to set up the trace for the Distributed Replay tool. This trace definition can be saved for later reuse, and the tool’s various options are explained.

This chapter provides a good overview of how to capture, record, and replay the data needed for database performance testing. The step-by-step instructions for setting up the trace for the Distributed Replay tool are very helpful, including good use of screenshots. The author notes that Microsoft should update the replay tool to use Extended Events (XEs).

Chapter 27 Database Workload Optimization

This chapter takes many of the topics covered in the previous chapters and applies them to a sample workload, captured via an XE session. The workload is extracted from the XEs and stored in a table, and the longest-running queries identified.

The costliest queries are analysed using techniques given previously in the book, including: 

  • Statistics (DBCC SHOW_STATISTICS; are they stale?)

  • Fragmentation (sys.dm_db_index_physical_stats)

  • Internal behaviour of the query

  • Exec plan (indexes? join strategy? constraints? functions?)

  • Costly plan steps (focus effort here)

  • Optimizing the costliest query (new index? check impact on workload)

  • Fix key lookups (use INCLUDE)
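
The key-lookup fix in the last step looks like this in practice (a sketch; the index and dbo.Orders table are hypothetical):

```sql
-- An index on OrderDate alone forces a key lookup per row to fetch
-- OrderTotal; adding it as an INCLUDE column makes the index covering,
-- so the lookups disappear from the plan.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    INCLUDE (OrderTotal)
    WITH (DROP_EXISTING = ON);  -- assumes the index already exists
```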

Any proposed changes should be tested against the whole workload, to ensure an improvement in one area doesn’t result in decreased workload performance elsewhere. Lastly, there is always another ‘most costly’ query; the iterative process is repeated until performance is deemed ‘good enough’.

This chapter shows the steps and processes that can be undertaken to identify and improve a given workload. The use of the AdventureWorks sample database is instructive, since it allows readers to follow along. As expected, there are good links to other chapters. It may have been useful to note that when tables/indexes are rebuilt, their statistics are automatically updated with the equivalent of a full scan.

Chapter 28 SQL Server Optimization Checklist

This chapter summarises performance best practices in a single place. It provides various checklists (together with explanations), including:

Database design 

  • Use constraints (e.g. primary key/unique; they help the optimizer produce a better plan)

  • Index design best practices (e.g. includes, filters, clustered indexes, missing indexes, etc.)

  • Put tables into in-memory storage

  • Use columnstore indexes 

Configuration settings 

  • Memory configuration options (max server memory)

  • Cost threshold for parallelism (consider raising from the default of 5; 35 is a common starting point)

  • Database compression (more CPU but typically faster IO) 
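
The two server-level settings above are changed via sp_configure; for example (the value 35 is just the common starting point mentioned above, not a universal recommendation):

```sql
-- Cost threshold for parallelism is an advanced option.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the threshold from its default of 5 cost units.
EXEC sys.sp_configure 'cost threshold for parallelism', 35;
RECONFIGURE;
```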

Database admin 

  • Keep the statistics up to date

  • Keep index fragmentation to a minimum

Query design 

  • Avoid non-sargable search conditions

  • Avoid optimizer hints

  • Ensure there are no implicit data type conversions

  • Minimize logging overhead

  • Adopt best practices for database transactions 
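
As a small illustration of the sargability point (my own sketch against a hypothetical dbo.Orders table):

```sql
-- Non-sargable: wrapping the column in a function prevents an
-- index seek on OrderDate.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2017;

-- Sargable rewrite: a range predicate on the bare column lets the
-- optimizer seek an index on OrderDate.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20170101' AND OrderDate < '20180101';
```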

In many ways this chapter summarises the whole book, and it certainly justifies the cost of the book itself. The recommendations offered are sensible but, as is repeated often in this book, ensure you test them on your own system to confirm they actually improve performance.


This book covers a wide range of performance topics and techniques, with the central aim of improving the performance of your SQL workload. It is generally easy to read, with practical step-by-step walkthroughs, helpful links between chapters and to websites, and good use of screenshots. Typically, code is provided to back up the assertions made. The book should take your level of expertise from level 3 to level 8 (based on a 1 to 10 scale).

While this book covers most code-based performance topics, there were times when I wanted more detail. There was no mention of more advanced performance tools such as SQLdiag or SQL Nexus. Sometimes the structure of the book felt awkward (e.g. PerfMon is explained in the memory chapter but its example usage is given in a later chapter). Having a separate, detailed tools chapter that is repeatedly referenced might have been better.

This book is very similar to the 2014 edition: while the screenshots have been updated for 2017, the text is perhaps 70% the same or similar (two new chapters have been added). Although written for SQL Server 2017, much is applicable to 2014 and earlier. If you want a good, wide-ranging SQL code optimization book, I can certainly recommend this one.

For more recommended SQL Server books see Pick Of The Shelf - SQL Server 








Last Updated ( Saturday, 03 November 2018 )