Pro T-SQL 2012 Programmer’s Guide
Chapter 6 Triggers

This chapter provides a good review of the three types of trigger: DML, DDL, and logon triggers. Many of the erstwhile uses for DML triggers (e.g. referential integrity and auditing) are now obsolete, but DML triggers remain appropriate for enforcing complex business rules, especially rules that span tables. The inserted and deleted tables are discussed, as are recursive triggers, the UPDATE() and COLUMNS_UPDATED() functions, and triggers on views.
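As a flavour of the cross-table business rule scenario, here is a minimal sketch of an AFTER trigger that reads the inserted table; the table and column names (dbo.Orders, dbo.Customers, CreditLimit) are illustrative, not taken from the book.

```sql
-- Illustrative AFTER trigger enforcing a cross-table business rule.
-- All object names here are hypothetical.
CREATE TRIGGER trg_Orders_CheckCredit
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted holds the new row versions; deleted holds the old ones
    IF EXISTS (
        SELECT 1
        FROM inserted AS i
        JOIN dbo.Customers AS c ON c.CustomerID = i.CustomerID
        WHERE i.OrderTotal > c.CreditLimit
    )
    BEGIN
        RAISERROR('Order exceeds customer credit limit.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;
```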

DDL triggers are used to identify changes to server or database structures; all the relevant information can be obtained via the EVENTDATA() function. Logon triggers are invoked after a successful logon but before the session is established, so they can't be used for monitoring unsuccessful logons, but they can be used for restricting connections.
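Since EVENTDATA() returns XML, a DDL trigger typically shreds it with the xml data type's value() method. A small sketch, assuming a hypothetical dbo.DDLAudit logging table:

```sql
-- Illustrative database-scoped DDL trigger logging DROP TABLE events.
-- dbo.DDLAudit is an assumed audit table.
CREATE TRIGGER trg_AuditDropTable
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    DECLARE @evt xml = EVENTDATA();
    INSERT INTO dbo.DDLAudit (EventTime, LoginName, CommandText)
    VALUES (
        @evt.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime'),
        @evt.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;
```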

There is some useful sample code for auditing data changes, though in practice this should be replaced by the purpose-built Change Data Capture (CDC) functionality. I’ve always found triggers troublesome when performing analysis for a problem, because they are the last place I look; they are not obvious programming features.

Chapter 7 Encryption

This chapter discusses the various encryption features provided by SQL Server, ranging from the encryption of individual columns to the whole database. Encryption capabilities have tended to increase with each version of SQL Server. The hierarchy of encryption includes Windows Data Protection API, Service Master Key, Database Master Key, and certificates.

The author describes Transparent Data Encryption as a new SQL Server 2012 feature, whereas it was actually introduced in SQL Server 2008. That error aside, this chapter is a good introduction to, and overview of, the many and varied encryption capabilities of SQL Server 2012.

Chapter 8 Common Table Expressions and Windowing Functions

CTEs make T-SQL code more readable, more maintainable, and easier to write. A CTE is essentially a named derived table; in the past, temporary tables or inline derived tables would often be used where a CTE now fits. Useful example code is given for simple, multiple, and recursive CTEs.
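The recursive form is the one that has no tidy pre-2005 equivalent. A minimal sketch, assuming a self-referencing dbo.Employees(EmployeeID, ManagerID, Name) table (my example, not the book's):

```sql
-- Illustrative recursive CTE walking an employee hierarchy.
WITH OrgChart AS (
    -- anchor member: the top of the tree
    SELECT EmployeeID, ManagerID, Name, 0 AS Depth
    FROM dbo.Employees
    WHERE ManagerID IS NULL
    UNION ALL
    -- recursive member: each pass adds the next level down
    SELECT e.EmployeeID, e.ManagerID, e.Name, o.Depth + 1
    FROM dbo.Employees AS e
    JOIN OrgChart AS o ON e.ManagerID = o.EmployeeID
)
SELECT EmployeeID, Name, Depth
FROM OrgChart
OPTION (MAXRECURSION 100);   -- guard against runaway recursion
```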

SQL Server 2012 introduced a wealth of windowing functions, which can be used for partitioning results and applying numbering and ranking. Such functions include CUME_DIST, PERCENT_RANK, LAG, and LEAD. Some very useful sample code is given to get you started with the newer windowing functions.
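The new functions all hang off the OVER clause. A quick sketch of the four named above, against an assumed dbo.Sales(Region, SaleDate, Amount) table of my own invention:

```sql
-- Illustrative use of the SQL Server 2012 analytic window functions.
SELECT Region, SaleDate, Amount,
       LAG(Amount)    OVER (PARTITION BY Region ORDER BY SaleDate) AS PrevAmount,
       LEAD(Amount)   OVER (PARTITION BY Region ORDER BY SaleDate) AS NextAmount,
       CUME_DIST()    OVER (PARTITION BY Region ORDER BY Amount)   AS CumeDist,
       PERCENT_RANK() OVER (PARTITION BY Region ORDER BY Amount)   AS PctRank
FROM dbo.Sales;
```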

I’m not sure why the two subjects of this chapter (CTEs and windowing functions) were grouped together; there’s no obvious linkage. The details on CTEs should have been given in an earlier chapter, since CTEs have already appeared in earlier sample code.

Chapter 9 Data Types and Advanced Data Types

T-SQL is a strongly typed language: variables and columns need a valid data type, which determines what kind of data they can hold. Various simple data types are examined, including char, varchar, nvarchar, bit, int, bigint, float, real, uniqueidentifier (GUID), and the date and time types. This is followed by some of the newer, more complex data types, including hierarchyid and the spatial types. Lastly, FILESTREAM is examined.

A good overview with useful sample code is provided for most of the data types. Perhaps more emphasis could have been put on choosing the correct data type. If you don’t specify the correct data type, SQL Server will typically make the conversion for you silently; this can result in an index being ignored or used sub-optimally (e.g. a scan instead of a seek), and so can have serious performance implications.
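The classic example of this trap is comparing a varchar column to an nvarchar literal. A sketch, assuming a hypothetical dbo.Accounts table with an index on a varchar(20) AccountNumber column:

```sql
-- varchar has lower type precedence than nvarchar, so here the COLUMN is
-- implicitly converted on every row, and the index is scanned, not sought.
SELECT AccountNumber FROM dbo.Accounts WHERE AccountNumber = N'12345';

-- Matching the column's own type allows an index seek.
SELECT AccountNumber FROM dbo.Accounts WHERE AccountNumber = '12345';
```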

Chapter 10 Full-Text Search

Full-Text Search (FTS) enables you to issue queries against documents stored in SQL Server. The main keywords used are FREETEXT, CONTAINS, FREETEXTTABLE, and CONTAINSTABLE. The architecture of FTS is discussed, and a step-by-step example of implementing it is given, together with sample code to retrieve data. Various DMVs are shown for analysis and troubleshooting.
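To show the difference between the predicate and table-valued forms, a brief sketch, assuming a full-text index already exists on an illustrative dbo.Documents(DocID, DocTitle, DocContent) table:

```sql
-- FREETEXT: loose, meaning-based matching as a WHERE predicate.
SELECT DocID, DocTitle
FROM dbo.Documents
WHERE FREETEXT(DocContent, 'database performance');

-- CONTAINSTABLE: returns a KEY/RANK rowset that is joined back for ranking.
SELECT d.DocID, d.DocTitle, ft.RANK
FROM CONTAINSTABLE(dbo.Documents, DocContent, 'NEAR(index, fragmentation)') AS ft
JOIN dbo.Documents AS d ON d.DocID = ft.[KEY]
ORDER BY ft.RANK DESC;
```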

Chapter 11 XML

The newer versions of SQL Server have increasingly tight integration with XML. Older methods like OPENXML, which was awkward to use, have tended to be replaced by the xml data type and XQuery processing. Example code is provided for each of the methods that can be applied to the xml data type: query(), value(), exist(), modify(), and nodes(). To aid performance, XML data can have primary and secondary XML indexes (broadly analogous to clustered and non-clustered indexes on tables). While it is possible to process XML data in SQL Server, using the CLR can offer better performance, and an example illustrating this is provided. Personally, I’ve known cases where the performance of XML processing improved by 500% by using the CLR instead of native T-SQL.
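For reference, here is a compact sketch of the xml data type methods in action; the XML content is invented for illustration:

```sql
DECLARE @x xml =
    N'<order id="1"><item sku="A1" qty="2"/><item sku="B2" qty="5"/></order>';

-- value() extracts a scalar; exist() returns 1 or 0.
SELECT @x.value('(/order/@id)[1]', 'int')  AS OrderID,
       @x.exist('/order/item[@qty > 3]')   AS HasBigLine;

-- nodes() shreds repeating elements into a rowset.
SELECT i.n.value('@sku', 'varchar(10)') AS Sku,
       i.n.value('@qty', 'int')         AS Qty
FROM @x.nodes('/order/item') AS i(n);

-- modify() changes the XML in place.
SET @x.modify('replace value of (/order/item[@sku="A1"]/@qty)[1] with "3"');
```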

Chapter 12 XQuery and XPath

This chapter expands on chapter 11’s XML coverage, with greater emphasis on XPath and XQuery processing. Specifically, there’s a detailed discussion, with examples, of XPath expression syntax, axis specifiers, and node tests. The rest of the chapter is given over to XQuery discussion and examples. If you need to work with XML within SQL Server, this chapter is a useful starting point to learn more.
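XQuery's FLWOR expressions are where this goes beyond plain path navigation; a tiny sketch using the query() method, with invented content:

```sql
-- Illustrative FLWOR expression inside query(): filter and reshape the XML.
DECLARE @x xml =
    N'<books><book price="10">T-SQL</book><book price="25">XQuery</book></books>';

SELECT @x.query('
    for $b in /books/book
    where $b/@price > 15
    return <expensive>{data($b)}</expensive>');
```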

Chapter 13 Catalogue Views and Dynamic Management Views

Catalogue views provide metadata about SQL Server database structures, e.g. details of all the columns in all the tables in a given database. DMVs provide details of the inner workings of a SQL Server instance.
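One example of each, to make the distinction concrete (both queries are mine, not the book's):

```sql
-- Catalogue views: every column in every user table in the current database.
SELECT t.name AS TableName, c.name AS ColumnName
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
ORDER BY t.name, c.column_id;

-- DMV: cumulative wait statistics, a common starting point for instance tuning.
SELECT TOP (5) wait_type, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```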

Some sample code is provided in the form of an index maintenance script. The idea behind the script is sound: indexes often become fragmented over time, degrading performance, so they are regularly rebuilt to remove the fragmentation. The script on p404, dbo.RebuildIndexes, uses the DMV sys.dm_db_index_physical_stats to create and execute an index rebuild script. So far so good.

The script then updates the statistics on the tables whose indexes are being rebuilt, and issues a recompile against any object that uses the table. BOTH OF THESE STEPS ARE WRONG, and should have been caught by the technical reviewers. If you rebuild an index, its statistics are automatically rebuilt with 100% sampling, so updating the statistics afterwards is unnecessary; worse, if it is done, the sample size is likely to be less than 100%, so your statistics are likely to become less representative. Also, if you update the statistics on an index (e.g. via an index rebuild), any objects that use that index are automatically recompiled on next use, so there is no need to recompile anything. Lastly, if an index does need its statistics updated, it makes no sense to apply the UPDATE STATISTICS command to the whole underlying table (which updates statistics on all the indexes on the table, even those that don’t need it); it should be applied to the specific indexes.
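To spell out the alternative being argued for, a sketch with an invented table and index name:

```sql
-- A rebuild already refreshes the index's statistics at 100% sampling,
-- so nothing further is needed after this:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- For an index that needs a statistics refresh WITHOUT a rebuild, target
-- that specific index rather than the whole table:
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;
```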

Mistakes aside, this is an interesting and useful chapter.



Last Updated ( Monday, 30 September 2013 )