SQL Server 2022 Query Performance Tuning (Apress)

Chapter 16: Blocking and Blocked Processes

When SQL code runs it takes out locks; if other SQL code wants the same data, blocking can result. The price of this data consistency is reduced concurrency and performance. The chapter opens with a discussion of the fundamentals of blocking (e.g. locking, blocking, deadlocks), and moves on to understanding blocking within the ACID (Atomic, Consistent, Isolated, Durable) framework. 

The chapter then explains the various levels of lock within the lock object hierarchy (e.g. Page (PAG)), before examining lock modes (e.g. Shared (S)) and lock escalation. The effects of the various isolation levels on locking and query performance are also discussed. 
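
To give a flavour of the lock hierarchy and lock modes (this is my own sketch, not code from the book), the locks currently held or requested can be inspected via the sys.dm_tran_locks DMV:

  -- Current locks grouped by level in the lock hierarchy and by lock mode
  SELECT resource_type,        -- e.g. DATABASE, OBJECT, PAGE, KEY
         request_mode,         -- e.g. S (shared), X (exclusive), IX (intent exclusive)
         request_status,       -- GRANT or WAIT
         COUNT(*) AS lock_count
  FROM sys.dm_tran_locks
  GROUP BY resource_type, request_mode, request_status
  ORDER BY lock_count DESC;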

The practical part of the chapter then considers how to capture blocking data, using DMVs to see current blocks and an Extended Events session for historical data (a small DMV sketch follows the list below). Recommendations for reducing blocking include:

  • Optimize the queries executed by blocking and blocked SPIDs

  • Decrease the isolation level

  • Partition the contended data   
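
As an illustration of the DMV approach mentioned above (my own sketch, not the book's code), currently blocked requests and the sessions blocking them can be listed with something like:

  -- Requests that are currently blocked, who is blocking them, and the SQL involved
  SELECT r.session_id,
         r.blocking_session_id,
         r.wait_type,
         r.wait_time,                      -- milliseconds spent waiting so far
         t.text AS sql_text
  FROM sys.dm_exec_requests AS r
  CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
  WHERE r.blocking_session_id <> 0;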

This chapter provides a very good discussion of many aspects of blocking, including why locking is needed, how blocking affects performance, how it can be captured, and how it can be reduced.  

Chapter 17: Causes and Solutions for Deadlocks

Deadlocks are essentially blocking that can’t be resolved in the normal manner (i.e. by waiting), so SQL Server chooses one of the pieces of SQL to roll back. The chapter opens with a discussion of what deadlocks are, and how they often result from lock escalation. Code is provided to illustrate how a deadlock can be handled in SQL code via retries. Various methods of collecting deadlock data are discussed, including:  

  • Use Extended Events (the xml_deadlock_report event), probably the best option – see the sketch after this list

  • Set trace flag 1222

  • Use trace events
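
As a flavour of the Extended Events approach (my own sketch, not the book's code), recent xml_deadlock_report events can be pulled from the built-in system_health session's ring buffer:

  -- Extract recent deadlock reports from the system_health Extended Events session
  SELECT CAST(xet.target_data AS XML).query(
             '//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlock_reports
  FROM sys.dm_xe_session_targets AS xet
  JOIN sys.dm_xe_sessions AS xes
      ON xes.address = xet.event_session_address
  WHERE xes.name = N'system_health'
    AND xet.target_name = N'ring_buffer';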

 

A detailed discussion of how to interpret the deadlock data is provided (the deadlock graph, and the various XML sections). The chapter ends with a look at how deadlocks can be reduced, including:

 

  • Access resources in the same physical order

  • Decrease the number of resources accessed (e.g. by adding a covering index)  

The chapter provides a good discussion of what deadlocks are, and how they can be captured, analysed, and reduced. I suspect the deadlock retry code provided would be enhanced if it contained a WAITFOR DELAY command, so that the code retries after a given delay, by which time the other process has hopefully completed. 
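
A minimal sketch of that idea (mine, not the book's code), trapping the deadlock victim error 1205 in a TRY/CATCH and pausing before retrying:

  -- Retry the unit of work up to three times if it is chosen as a deadlock victim
  DECLARE @retries INT = 3;

  WHILE @retries > 0
  BEGIN
      BEGIN TRY
          BEGIN TRANSACTION;

          -- ... the work that might deadlock goes here ...

          COMMIT TRANSACTION;
          BREAK;                            -- success, leave the retry loop
      END TRY
      BEGIN CATCH
          IF XACT_STATE() <> 0
              ROLLBACK TRANSACTION;

          IF ERROR_NUMBER() <> 1205 OR @retries = 1
          BEGIN
              THROW;                        -- not a deadlock, or retries exhausted: re-raise
          END;

          SET @retries -= 1;
          WAITFOR DELAY '00:00:05';         -- wait, hoping the other process has finished
      END CATCH;
  END;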

Chapter 18: Row-by-Row Processing from Cursors and Other Causes

SQL is a set-based language, where you say what you want to do rather than how to do it (the latter is how a procedural language like C# works). Cursors undertake row-by-row processing, which is typically slower than the corresponding set-based processing. 

The chapter opens with a look at cursor fundamentals, then moves on to discuss cursor location (client or server), concurrency, and cursor types (forward-only, static, keyset-driven, and dynamic), and how each of these affects performance. Various recommendations are given (see the sketch after the list below), including:  

  • Use set-based SQL statements over T-SQL cursors

  • When using SQL Server cursors, use the FAST_FORWARD cursor type

  • When using ADO.NET, use the DataReader object   
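
To illustrate the first two recommendations (my own sketch, using a made-up dbo.Orders table), here is the same change written as a FAST_FORWARD cursor and as a single set-based statement:

  -- Row-by-row: a FAST_FORWARD cursor, updating one row at a time
  DECLARE @OrderId INT;

  DECLARE order_cur CURSOR FAST_FORWARD FOR
      SELECT OrderId FROM dbo.Orders WHERE Status = N'Pending';

  OPEN order_cur;
  FETCH NEXT FROM order_cur INTO @OrderId;

  WHILE @@FETCH_STATUS = 0
  BEGIN
      UPDATE dbo.Orders SET Status = N'Processed' WHERE OrderId = @OrderId;
      FETCH NEXT FROM order_cur INTO @OrderId;
  END;

  CLOSE order_cur;
  DEALLOCATE order_cur;

  -- Set-based: the same work in a single statement, typically much faster
  UPDATE dbo.Orders SET Status = N'Processed' WHERE Status = N'Pending';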

This chapter provides a good explanation of cursor processing, the various cursor types, and how cursors should be used. The various recommendations should help improve cursor performance. I would have liked a mention that developers from procedural backgrounds (e.g. C) often tend to code in a row-by-row fashion, not knowing how to perform the equivalent set-based processing.

Chapter 19: Memory-Optimized OLTP Tables and Procedures

In-memory tables perform faster than disk-based tables because locks and latches aren’t used, and reading from physical disk is much slower than reading from memory. The chapter opens with a look at in-memory OLTP fundamentals, examining the reasons for its use and how it is implemented (an optimistic concurrency approach coupled with versioned data changes). Various limitations are discussed (e.g. the need for more memory). A useful step-by-step example of creating in-memory OLTP tables is provided. There’s a helpful discussion about hash indexes and getting the bucket count correct. Compiling stored procedures to native code results in fewer CPU instructions being executed, which, together with the code being compiled rather than interpreted, results in faster performance – this is discussed with useful example code. 
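
As a flavour of the syntax involved (my own sketch, not the book's code; it assumes the database already has a memory-optimized filegroup, and the object names are made up), here is a memory-optimized table with a hash index, plus a natively compiled procedure that reads it:

  -- Memory-optimized table with a hash primary key (bucket count sized to the expected row count)
  CREATE TABLE dbo.SessionCache
  (
      SessionId INT NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
      UserName  NVARCHAR(100) NOT NULL,
      LastSeen  DATETIME2 NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
  GO

  -- Natively compiled stored procedure reading the table
  CREATE PROCEDURE dbo.GetSession @SessionId INT
  WITH NATIVE_COMPILATION, SCHEMABINDING
  AS
  BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
      SELECT SessionId, UserName, LastSeen
      FROM dbo.SessionCache
      WHERE SessionId = @SessionId;
  END;
  GO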

There’s a brief look at the included tools: the Memory Optimization Advisor, which suggests which tables might be migrated, and the Native Compilation Advisor, which suggests which stored procedures might be migrated. 

This chapter provides a useful overview of what in-memory tables and natively compiled stored procedures are, how they work, and how they improve performance. There is some useful template code provided. 

Chapter 20: Graph Databases

Graph databases are not relational; they model many-to-many relationships between entities. They are useful, for example, for finding how friends are linked to other friends. Graph databases involve nodes (entities) and edges (relationships), both of which SQL Server implements as tables. 
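
As an illustration (my own sketch, with made-up table names), here are node and edge tables, and a MATCH query that finds friends-of-friends:

  -- Node table (entities) and edge table (relationships)
  CREATE TABLE dbo.Person (PersonId INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
  CREATE TABLE dbo.FriendOf AS EDGE;

  -- People who are friends of Alice's friends
  SELECT p3.Name
  FROM dbo.Person AS p1,
       dbo.FriendOf AS f1,
       dbo.Person AS p2,
       dbo.FriendOf AS f2,
       dbo.Person AS p3
  WHERE MATCH(p1-(f1)->p2-(f2)->p3)
    AND p1.Name = N'Alice';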

This is a new chapter, and I’m not sure it was necessary. I don’t know anyone who’s making extensive use of graph databases, so I suspect it will become a minor feature in future releases.

Unfortunately, the chapter spends 80% or more of its time discussing what a graph database is and how it is queried, and only around 10% on techniques to improve performance.


