Do you know how to measure the impact on performance of your SQL changes? Do you know which parts of your stored procedures (or batch of SQL statements) you need to optimise? Both of these questions are answered in this article.
Since this article uses Dynamic Management Views (DMVs), you need SQL Server version 2005 or above to use the described generic test harness.
Various changes can be made to improve the performance of your SQL queries. Such changes include adding an index, putting the underlying data on different physical devices, and using a different algorithm.
This article shows how to quantify the effect of any such changes on your SQL queries.
In my previous article, Improve SQL performance – find your missing indexes, I showed how SQL Server’s DMVs can be used to identify your missing indexes.
These missing indexes can have a profound impact on the performance of your SQL queries. I ended the article by saying that when you change your SQL (or add indexes), it is prudent to test any changes to ensure they have a positive effect on performance.
Sometimes it can be difficult to quantify the effect of a change on performance. Often the change is too small to measure, or requires the source code to be instrumented (i.e. have debug statements inserted) before measurements can be taken. Additionally, it is not always clear which statements have been affected by the change.
The test harness described in this article can be used to quantify the effect of a proposed change on the performance of your SQL queries. Additionally, it highlights, for a batch of SQL or a stored procedure, detailed performance related metrics for each individual SQL statement executed. This latter point can be useful in identifying those sections of your stored procedures (or batch of SQL statements) where performance tuning effort should be focused.
Which sections of your SQL should you improve?
When your SQL queries run slowly, the query plan is often used to determine where the underlying bottleneck resides, and where subsequent attention and refactoring should be focused. The query plan is often very useful in identifying problematic SQL statements; however, because the plan knows nothing about concurrency or which other queries are running, it can give misleading results.
On one occasion, the query plan informed me that a delete statement was responsible for 0% of a stored procedure’s execution cost; however, the generic test harness described below showed me that the same delete statement was actually taking 85% of the stored procedure’s execution time. With this in mind, I believe the generic test harness described here will prove most useful.
To identify how long each SQL statement in a stored procedure or batch takes to execute, we use DMV snapshots.
Here we take an initial snapshot of the values of the relevant DMV counters, then run the SQL we want to investigate, and then take another DMV snapshot of the same counters.
Finally, we calculate the delta between the two snapshots; we do this because the DMV counters are cumulative.
This will provide us, for each SQL statement run between the DMV snapshots, with details of the total duration, time on the CPU, time being blocked or waiting, number of reads, number of writes, and the number of times each statement was executed.
Summing the total durations will give us a measure of the total duration of the SQL query under investigation. Perhaps more importantly, the delta will identify which SQL statements are taking a long time to execute, and these can be the target for any proposed improvements.
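To make the arithmetic concrete, here is a small worked example using invented counter values for a single statement; in this hypothetical case, the statement spent most of its elapsed time waiting rather than on the CPU:

```sql
-- Hypothetical counter values for one cached statement (times in microseconds):
--                       pre-snapshot    post-snapshot    delta
-- total_elapsed_time       1,000,000        5,000,000    4,000,000  (Duration)
-- total_worker_time          800,000        1,400,000      600,000  (Time on CPU)
-- execution_count                 10               12            2  (Executions)
--
-- Time blocked = Duration delta - CPU delta
--              = 4,000,000 - 600,000 = 3,400,000 microseconds
```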
Details of the relevant DMVs involved in the snapshots are given below:
sys.dm_exec_query_stats: Contains aggregated performance statistics for cached query plans
sys.dm_exec_sql_text: DMF that returns the SQL text identified by a given plan_handle or sql_handle
The Generic Test Harness
Running the SQL query given below will produce metrics for each individual SQL statement run. You will need to replace the line “EXEC PutYourQueryHere” with the name of the stored procedure or batch of SQL statements you want to test.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- Pre-work snapshot.
SELECT sql_handle, plan_handle
    , statement_start_offset, statement_end_offset
    , total_elapsed_time, total_worker_time
    , total_logical_reads, total_logical_writes
    , total_clr_time, execution_count
INTO #PreWorkQuerySnapShot
FROM sys.dm_exec_query_stats

-- ToDo: put your own SQL query here.
EXEC PutYourQueryHere

-- Post-work snapshot.
SELECT sql_handle, plan_handle
    , statement_start_offset, statement_end_offset
    , total_elapsed_time, total_worker_time
    , total_logical_reads, total_logical_writes
    , total_clr_time, execution_count
INTO #PostWorkQuerySnapShot
FROM sys.dm_exec_query_stats

-- Calculate snapshot delta.
SELECT p2.total_elapsed_time -
        ISNULL(p1.total_elapsed_time, 0) AS [Duration]
    , p2.total_worker_time -
        ISNULL(p1.total_worker_time, 0)
      AS [Time on CPU]
    , (p2.total_elapsed_time -
        ISNULL(p1.total_elapsed_time, 0)) -
      (p2.total_worker_time -
        ISNULL(p1.total_worker_time, 0))
      AS [Time blocked]
    , p2.total_logical_reads -
        ISNULL(p1.total_logical_reads, 0) AS [Reads]
    , p2.total_logical_writes -
        ISNULL(p1.total_logical_writes, 0) AS [Writes]
    , p2.total_clr_time -
        ISNULL(p1.total_clr_time, 0)
      AS [CLR time]
    , p2.execution_count -
        ISNULL(p1.execution_count, 0) AS [Executions]
    , SUBSTRING (
        qt.text, p2.statement_start_offset/2 + 1,
        ((CASE WHEN p2.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
            ELSE p2.statement_end_offset
          END - p2.statement_start_offset)/2) + 1)
      AS [Individual Query]
    , qt.text AS [Parent Query]
    , DB_NAME(qt.dbid) AS DatabaseName
FROM #PreWorkQuerySnapShot p1
RIGHT OUTER JOIN #PostWorkQuerySnapShot p2
    ON p2.sql_handle = p1.sql_handle
    AND p2.plan_handle = p1.plan_handle
    AND p2.statement_start_offset = p1.statement_start_offset
    AND p2.statement_end_offset = p1.statement_end_offset
CROSS APPLY sys.dm_exec_sql_text(p2.sql_handle) AS qt
WHERE p2.execution_count != ISNULL(p1.execution_count, 0)
ORDER BY qt.text, p2.statement_start_offset

-- Tidy up.
DROP TABLE #PreWorkQuerySnapShot
DROP TABLE #PostWorkQuerySnapShot
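As an example of using the harness to quantify a proposed change, you can run it once against the existing schema, apply the change, and run it again. The sketch below assumes a change of adding an index; the table, column and index names are hypothetical:

```sql
-- Run the harness with your procedure in place of "EXEC PutYourQueryHere",
-- and record the Duration, Time on CPU, Time blocked and Reads columns.
-- Then apply the proposed change (hypothetical index, table and column names):
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
-- Re-run the harness and compare the two sets of deltas to see whether
-- the change reduced the cost of the affected statements.
```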