Query Store for SQL Server 2019 (Apress)
Chapters 4 to 7



Chapter 4:​ Standard Query Store Reports 

Query Store contains some very useful reports, detailed by time period, for quickly identifying problems and areas worth investigating further. This chapter discusses the following reports:


  • Regressed Queries Report (which queries are now performing relatively slowly)

  • Overall Resource Consumption Report (by duration, execution count, CPU time, logical reads)

  • Top Resource Consuming Queries Report (most expensive queries in last hour)

  • Queries with Forced Plan Report (useful to review forced plans in case no longer needed)

  • Queries with High Variation Report (often indicate parameterization problems [parameter sniffing])

  • Query Wait Statistics Report (wait stats for a given time period, useful in troubleshooting)

  • Tracked Queries Report (specific queries you want to keep track of)


For each report, its purpose and usage are outlined, with helpful screenshots. Each report’s various options and menu items are explained (e.g. the Top Resource Consuming Queries Report offers various measures of ‘expensive’, including duration, CPU time, and logical reads).

These reports provide a great deal of useful information that was previously only readily available in third-party products (e.g. SQL Sentry); it makes me wonder whether these products will now need to improve significantly to retain their market share.

This was a very useful chapter, explaining Query Store’s key reports, what they are and how to use them. These are sure to help you track down and correct system problems. Again, some bad use of English: “For example, if you were to execute a query looking for every living in Montana (MO)…”

Chapter 5:​ Query Store Catalog Views 

The reports in the previous chapter get their data from Query Store’s underlying catalog views. This chapter discusses these, namely:


  • sys.​database_​query_​store_​options (how Query Store is set up)

  • sys.​query_​context_​settings (context setting of queries e.g. QUOTED_IDENTIFIER)

  • sys.​query_​store_​plan (plans associated with queries)

  • sys.​query_​store_​query (metrics for executed queries e.g. last_execution_time)

  • sys.​query_​store_​query_​text (SQL text of the queries)

  • sys.​query_​store_​wait_​stats (wait stats associated with the queries)

  • sys.​query_​store_​runtime_​stats (runtime stats for queries, aggregated by time period)

  • sys.​query_​store_​runtime_​stats_​interval (start and end time of each time period)


In each case, SQL is provided to get the underlying data from the catalog view for all the databases on the instance. Each of the columns in each catalog view is described. Some useful SQL is provided to identify wait stats for a given object (e.g. stored procedure) – perhaps this could have been extended to identify the queries with the most waits (e.g. of a given type) etc.
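The kind of query the chapter provides for object-level wait stats might look something like this sketch (illustrative, not the authors’ exact code; the procedure name is hypothetical):

```sql
-- Illustrative sketch: wait stats for a given stored procedure.
SELECT qsq.query_id,
       qsws.wait_category_desc,
       SUM(qsws.total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_wait_stats AS qsws
  ON qsws.plan_id = qsp.plan_id
WHERE qsq.object_id = OBJECT_ID('dbo.MyProc')  -- hypothetical procedure name
GROUP BY qsq.query_id, qsws.wait_category_desc
ORDER BY total_wait_ms DESC;
```

Dropping the `WHERE` clause and ordering by `total_wait_ms` would give the extension I suggest: the queries with the most waits of a given type.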

Most of the information in this chapter is reference information, readily available in Microsoft’s documentation – I’m not sure this chapter adds much. It would have been useful to provide more SQL that joins the various catalog views together, to identify ‘interesting’ things (e.g. during a given time period, which queries used the most IO, CPU, waits etc) – one such query is provided in the next chapter…
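To show the kind of join I have in mind, here’s a sketch (my own, not from the book; the time window is hypothetical) that ties the runtime stats, intervals, plans, and query text together to find the most CPU-hungry queries in a window:

```sql
-- Illustrative sketch: most expensive queries (by CPU) in a given window.
SELECT TOP (10)
       qsqt.query_sql_text,
       SUM(qsrs.avg_cpu_time * qsrs.count_executions)        AS total_cpu_time,
       SUM(qsrs.avg_logical_io_reads * qsrs.count_executions) AS total_logical_reads
FROM sys.query_store_runtime_stats AS qsrs
JOIN sys.query_store_runtime_stats_interval AS qsrsi
  ON qsrsi.runtime_stats_interval_id = qsrs.runtime_stats_interval_id
JOIN sys.query_store_plan AS qsp
  ON qsp.plan_id = qsrs.plan_id
JOIN sys.query_store_query AS qsq
  ON qsq.query_id = qsp.query_id
JOIN sys.query_store_query_text AS qsqt
  ON qsqt.query_text_id = qsq.query_text_id
WHERE qsrsi.start_time >= '2021-07-13 20:00'   -- hypothetical window
  AND qsrsi.end_time   <= '2021-07-14 06:00'
GROUP BY qsqt.query_sql_text
ORDER BY total_cpu_time DESC;
```

Swapping the `ORDER BY` measure (logical reads, duration, waits) gives the other ‘interesting’ variations.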

Chapter 6:​ Query Store Use Cases 

This chapter outlines various uses for Query Store. It can be used to identify queries that are using more resources or running slower than ‘usual’. To identify what is usual, you need to create a baseline – something that can be compared against. The chapter walks through how to create a baseline (clear Query Store, run your queries, and view the various reports e.g. Overall Resource Consumption Report). Some useful SQL for obtaining the queries with the longest cumulative duration is provided. Similarly, another workload can be run and compared with this baseline – any difference can be highlighted and investigated further.

As an adjunct to this, when users say last night’s run was slow, you can look at the selected time period and compare it with a baseline you have created, to determine the problem.  

Next, the authors look at how Query Store can be used to identify regressed queries, and specifically how query plans can be compared. There’s a brief section on identifying Top Resource Consuming Queries, which can be sorted by many attributes (e.g. duration, reads, writes).

Another use of Query Store is to identify problems after a SQL Server upgrade (or other upgrades e.g. storage). In particular, the Cardinality Estimator (CE) is discussed, together with its impact, and how you might want to reset it to the pre-2014 CE at the query or database level if there are problems.
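For reference, reverting to the legacy CE looks like this (my sketch, not the book’s code; the table name is hypothetical):

```sql
-- Illustrative: revert to the pre-2014 (legacy) cardinality estimator.

-- Database level:
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Query level, via a hint on the individual statement:
SELECT *
FROM dbo.Orders  -- hypothetical table
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

The query-level hint is usually preferable, since it limits the blast radius to the statements that actually regressed.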

It is also possible to identify related ad-hoc queries using the Query Store, and useful SQL is provided for this. This enables you to replace these related ad-hoc queries with consolidated queries, thus reducing the number of query plans etc. 
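The SQL for this typically leans on the fact that ad-hoc queries differing only in literal values share a `query_hash`; a sketch of the idea (mine, not the authors’ exact code):

```sql
-- Illustrative sketch: groups of ad-hoc queries that differ only in
-- literal values share the same query_hash.
SELECT q.query_hash,
       COUNT(DISTINCT q.query_id) AS number_of_variants,
       MIN(qt.query_sql_text)     AS example_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON qt.query_text_id = q.query_text_id
GROUP BY q.query_hash
HAVING COUNT(DISTINCT q.query_id) > 1
ORDER BY number_of_variants DESC;
```

Groups with many variants are good candidates for parameterization or consolidation.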

This was another useful and interesting chapter, providing practical real-life example uses. The authors should have suggested here that another great use of Query Store is as a source of input for the Database Engine Tuning Advisor (DTA), helping you determine your optimal data structures, based on your queries. Again there was a degree of duplication (e.g. metrics for top consuming queries are given twice in this chapter).

Throughout the book, there are a few niggling errors in the use of English which should have been picked up by an editor. This chapter seems to have more than most, including:  

  • “In theory, you would run your workload to make any necessary changes to your environment;” (this doesn’t make sense)

  • “…and without any tracing turned you would have no idea what had run.” (missing word: turned on)

  • “Then we discussed who to view what happened last night…” (should be how instead of who)

  • “Finally, we discussed who to use Query Store to improve your ad-hoc workloads.” (should be how instead of who) 

Chapter 7:​ Forcing Plans 

When changes occur (e.g. statistics are updated), a new query plan is created, often based on the parameters used on its first run. After several changes, there may be several query plans. Sometimes a plan, whilst good enough for its first run, can be slow on subsequent runs. It is possible to force a previous plan to be used for the query.

The chapter opens with a look at what causes poorly performing query plans. Often this is due to parameter sniffing, where the query plan is created based on the distribution and density of values for its initial parameters. Subsequent usage, with different parameters, can be slow.

Next, the chapter looks at how to identify poor execution plans. Perhaps the easiest way is to look at the Regressed Queries Report; however, this might not contain your query. You can also inspect the query plans for signs of problems, including:

  • Warnings and errors

  • Fat pipes (large data transfers)

  • Scans (not always a problem)

  • Extra unexpected operators (e.g. sort) 

There’s a useful section on comparing the estimated and actual query plans. Query Store also provides a useful tool to compare query plans and highlight any differences – this can lead you to prefer one plan over another.

Having identified a preferred query plan, the chapter shows how you can force (and unforce) the optimizer to use your preferred query plan – this is shown via the Query Store reports, and additionally using SQL.
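The SQL side of this uses two built-in procedures (the IDs below are hypothetical; take them from `sys.query_store_plan` for your query):

```sql
-- Force a specific plan for a query (IDs are hypothetical examples).
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Later, remove the forcing so the optimizer can choose again.
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```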

Next, we look at how to identify the plans that have been forced, using both the Queries with Forced Plan Report and SQL. I was a bit disappointed with the latter; the authors should have included in the code the SQL text and the execution plan of the underlying query – this would have provided very useful code to the reader (and got them more familiar with the Query Store catalog views). Perhaps this could be extended further to identify currently running queries that use forced plans.
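The extension I have in mind would look something like this sketch (mine, not the book’s code):

```sql
-- Illustrative sketch: forced plans together with the query text
-- and the execution plan itself.
SELECT qsq.query_id,
       qsp.plan_id,
       qsqt.query_sql_text,
       TRY_CAST(qsp.query_plan AS XML) AS query_plan
FROM sys.query_store_plan AS qsp
JOIN sys.query_store_query AS qsq
  ON qsq.query_id = qsp.query_id
JOIN sys.query_store_query_text AS qsqt
  ON qsqt.query_text_id = qsq.query_text_id
WHERE qsp.is_forced_plan = 1;
```

Casting `query_plan` to XML makes it clickable in SSMS, so the reader can open each forced plan directly from the result grid.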

This was another useful chapter, on how to correct a relatively common problem. There’s a useful reference to Grant Fritchey’s “Execution Plans” book. Forcing plans can be useful for solving the immediate problem, but like the use of hints, their use should be limited and revisited often. The authors should have pointed out some other solutions to queries suddenly running slower due to parameter sniffing (e.g. local variables, OPTIMIZE FOR, RECOMPILE etc). Again, I feel underwhelmed by some of the obvious statements: “Clicking on the Yes button will immediately force the plan. Clicking on the No button will of course cancel the process.” (My italics).


Last Updated ( Wednesday, 14 July 2021 )