Pro SQL Server Internals

Chapter 30 Designing a Backup Strategy

The chapter opens with the premise that disasters happen; after highlighting some of these disasters, it discusses what can be done to minimise downtime and data loss.

Backups are the cornerstone of disaster recovery. The various backup types are discussed: full (everything), differential (extents changed since the last full backup), and log backups. The database and its transaction log need to be backed up separately. Various backup options are discussed, including checksum and compression, and various restore options are explained, e.g. restoring to a point in time.
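
To give a flavour of these backup types, here is a minimal sketch (the database name OrderDB and the file paths are hypothetical):

  -- Full backup with checksum validation and compression
  BACKUP DATABASE OrderDB
  TO DISK = N'D:\Backups\OrderDB_Full.bak'
  WITH CHECKSUM, COMPRESSION, INIT;

  -- Differential backup (extents changed since the last full backup)
  BACKUP DATABASE OrderDB
  TO DISK = N'D:\Backups\OrderDB_Diff.bak'
  WITH DIFFERENTIAL, CHECKSUM, COMPRESSION, INIT;

  -- Transaction log backup (requires the FULL or BULK_LOGGED recovery model)
  BACKUP LOG OrderDB
  TO DISK = N'D:\Backups\OrderDB_Log.trn'
  WITH CHECKSUM, COMPRESSION, INIT;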

In designing the backup strategy, the acceptable amount of data loss and downtime needs to be discussed realistically with the users; the smaller these values are, the more costly the recovery system will be. It should be noted that I/O throughput, the network, and the backup file location will all impact the restore time. Backups should be validated regularly using DBCC CHECKDB.

Careful planning of filegroups enables partial backups and restores. It’s also possible to improve performance by placing read-only data on read-only filegroups. Backing up to Windows Azure provides additional backup strategy possibilities.
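
As an illustrative sketch of such a partial backup (assuming a hypothetical database whose historic data sits on a read-only filegroup), only the read-write filegroups need backing up regularly:

  -- Back up only the read-write filegroups; the read-only filegroup
  -- holding historic data only needs to be backed up once
  BACKUP DATABASE OrderDB
  READ_WRITE_FILEGROUPS
  TO DISK = N'D:\Backups\OrderDB_Partial.bak'
  WITH CHECKSUM, COMPRESSION;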

This chapter discusses the design of backup strategies to minimize downtime and data loss. There’s a useful grid illustrating potential data loss by database recovery model. There’s a very good example of a realistic recovery sequence. There’s a helpful tip about instant file initialization (IFI) allowing faster restores.
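
To give a flavour of such a recovery sequence, a hedged sketch (with hypothetical file names and time) restoring the full backup, the latest differential, and then the log chain to a point in time looks like this:

  RESTORE DATABASE OrderDB FROM DISK = N'D:\Backups\OrderDB_Full.bak'
      WITH NORECOVERY;
  RESTORE DATABASE OrderDB FROM DISK = N'D:\Backups\OrderDB_Diff.bak'
      WITH NORECOVERY;
  RESTORE LOG OrderDB FROM DISK = N'D:\Backups\OrderDB_Log.trn'
      WITH NORECOVERY, STOPAT = '2014-06-01T10:30:00';
  -- Bring the database online once all backups have been applied
  RESTORE DATABASE OrderDB WITH RECOVERY;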

I expected the chapter to mention page-level restores. In many ways the recovery process should dictate the backup strategy; this should have been made clear. When disaster strikes it can be a stressful time, so I would urge you to create a small document detailing the steps to take when given types of disaster occur. Whilst this chapter is interesting, I’m not sure it truly relates to internals.
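
For reference, a page-level restore follows roughly this sequence (a hedged sketch with hypothetical file names and page ids):

  -- Restore just the damaged pages (file_id:page_id) from the last full backup
  RESTORE DATABASE OrderDB PAGE = '1:57, 1:202'
      FROM DISK = N'D:\Backups\OrderDB_Full.bak' WITH NORECOVERY;
  -- Apply existing log backups
  RESTORE LOG OrderDB FROM DISK = N'D:\Backups\OrderDB_Log.trn' WITH NORECOVERY;
  -- Take a fresh log backup and restore it to bring the pages up to date
  BACKUP LOG OrderDB TO DISK = N'D:\Backups\OrderDB_Tail.trn';
  RESTORE LOG OrderDB FROM DISK = N'D:\Backups\OrderDB_Tail.trn' WITH RECOVERY;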

Chapter 31 Designing a High Availability Strategy

High Availability (HA) aims to keep systems available when problems occur. In many ways it is related to Disaster Recovery (DR); however, HA aims to recover imperceptibly from hardware or software failure.

The chapter discusses various HA options, including:

  • SQL Server Failover Cluster (here the nodes share resources e.g. disks and databases)

  • Database mirroring (log records are used to update the mirror; synchronous mode gives no data loss but slower performance, asynchronous mode is the reverse)

  • AlwaysOn Availability Groups (from 2012, this is enhanced mirroring. The secondaries can be used for read-only processing, taking load off the primary server; a monitoring sketch follows this list)

  • Log shipping (this is an old and established HA method, very simple to use)

  • Replication (various types, can be fiddly to set up)
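
As a small, hedged sketch of checking availability group health (the catalog views and DMV below are those available in SQL Server 2012/2014):

  -- Report the role and synchronisation health of each availability replica
  SELECT ag.name AS ag_name,
         ar.replica_server_name,
         ars.role_desc,
         ars.synchronization_health_desc
  FROM sys.availability_groups AS ag
  JOIN sys.availability_replicas AS ar ON ar.group_id = ag.group_id
  JOIN sys.dm_hadr_availability_replica_states AS ars
       ON ars.replica_id = ar.replica_id;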

The chapter ends with a discussion on designing a HA strategy. Recovery time and data loss should be discussed with stakeholders; the smaller the acceptable data loss and downtime, the more expensive the recovery solution. Budget constraints are very important here. It is possible to mix the various HA options. Testing HA is very important, and the strategy should be re-evaluated regularly.

This chapter provides an interesting overview of the various HA options available to SQL Server, together with how to design a HA strategy. The options are not covered in detail, but links are provided for further information. There’s a useful comparison grid of the various HA options. Whilst this chapter is interesting, I do wonder if it is needed, since there is very little internals or performance information given. I suspect the diagnostic information supplied is too little to be of real use.

Chapter 32 In-Memory OLTP Internals

This chapter starts with an overview of Hekaton, the codename for the in-memory OLTP engine. The in-memory OLTP engine has 3 goals: optimize storage for memory, eliminate locks/latches, and use native code – all to improve performance significantly.

The structure of memory-optimized tables is discussed, together with their current limitations, e.g. no foreign keys or triggers. The in-memory OLTP engine is fully integrated with the other SQL Server components, e.g. backups, so no special processing is needed. The in-memory row-versioning system is discussed, together with garbage collection. The in-memory model and how it relates to transactions and concurrency is explained; in essence blocking doesn’t occur and validation is done at commit time. This improved performance comes at a cost: you will need more memory to hold the in-memory tables.
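
As a hedged sketch of what a memory-optimized table looks like (the table name and bucket count are hypothetical, and the database needs a memory-optimized filegroup first):

  CREATE TABLE dbo.SessionState
  (
      SessionId INT NOT NULL
          PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
      UserId INT NOT NULL,
      LastAccess DATETIME2 NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);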

This chapter provides a good overview of the in-memory OLTP engine, its purpose, usage, limits and integration. There are good links to further information. There’s useful example code showing how in-memory tables behave differently from on-disk tables, especially under different transaction isolation levels. Additionally, there are suggestions for troubleshooting.

Chapter 33 In-Memory OLTP Programmability

Having looked at the in-memory structures, this chapter takes a look at native compilation. Interpreted T-SQL often needs thousands of CPU instructions to run; using native compilation typically reduces this significantly. Under the covers, the SQL code is converted to C code and compiled into a DLL.

These natively compiled stored procedures can only access in-memory tables; normal T-SQL reaches those tables via the slower interop layer. The natively compiled stored procedures are optimized for unknown values, and their statistics are not updated automatically. The code will be recompiled when update statistics is run, when you drop and recreate the stored procedure, or when the server is restarted.
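
To give a flavour, a minimal sketch of a natively compiled stored procedure against the hypothetical table above (SQL Server 2014 syntax) is:

  CREATE PROCEDURE dbo.GetSession
      @SessionId INT
  WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
  AS
  BEGIN ATOMIC WITH
      (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
      SELECT UserId, LastAccess
      FROM dbo.SessionState
      WHERE SessionId = @SessionId;
  END;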

The chapter discusses some useful example code that creates a natively compiled stored procedure. The T-SQL features supported by native compilation are listed. By default, execution statistics are not collected due to their performance impact; however, they can be enabled via sys.sp_xtp_control_proc_exec_stats and sys.sp_xtp_control_query_exec_stats, but should only be used during troubleshooting.
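
For troubleshooting only, the collection can be switched on and the results examined roughly as follows (a hedged sketch):

  -- Enable execution statistics collection for natively compiled procedures
  EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 1;

  -- ...run the workload, then examine the collected statistics
  SELECT OBJECT_NAME(object_id) AS procedure_name,
         execution_count, total_worker_time, total_elapsed_time
  FROM sys.dm_exec_procedure_stats
  ORDER BY total_worker_time DESC;

  -- Switch collection off again when finished
  EXEC sys.sp_xtp_control_proc_exec_stats @new_collection_value = 0;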

Using in-memory OLTP has various drawbacks, including: the need to upgrade to SQL Server 2014, learn the new technology, and refactor and retest code. The latch/lock-free design should prove very useful for OLTP systems, perhaps less so for OLAP. Use the AMR tool to investigate which objects to migrate.

This was another interesting and detailed chapter, describing native compilation in detail, including some useful example code and hints for troubleshooting. I was surprised the AMR tool wasn’t discussed in more detail, since this is likely to be the initial point of contact in the upgrade process. The AMR tool is mistakenly referred to as ARM, a common error. This chapter is included under the columnstore section; it should have been included in the In-Memory OLTP Engine section.

Chapter 34 Introduction to Columnstore Indexes

Columnstore indexes were introduced in SQL Server 2012 to significantly improve the performance of scans and aggregates. They have a different format from the traditional row-based indexes, making heavy use of compression. In 2012 they were read-only, which limited their uptake; in 2014 they are updatable.

The chapter starts with a brief overview of data warehouse systems, discussing the star and snowflake schemas and the dimension and fact tables. If only a few columns need aggregating, then a columnstore index offers better performance, since the data is compressed; additionally, the processor can take advantage of the batch-mode execution model to reduce the load on the CPU.
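
As a hedged sketch (the fact table and columns are hypothetical), a nonclustered columnstore index covering the columns typically aggregated might be created as:

  CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
  ON dbo.FactSales (DateKey, ProductKey, Quantity, SalesAmount);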

The columnstore index’s internal structure is described next, in plenty of detail. Rows are split into row groups, and within each row group every column is stored in its own segment, which is then encoded and compressed. Use the catalog views sys.column_store_segments and sys.column_store_dictionaries to view the metadata.
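
A hedged sketch of examining the segment metadata via one of these catalog views:

  -- Row count and on-disk size of each compressed column segment
  SELECT column_id, segment_id, row_count, on_disk_size
  FROM sys.column_store_segments
  ORDER BY column_id, segment_id;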

The chapter then discusses a number of best practices, including:

  • reduce row size (more data per page, less I/O; especially good for fact tables)

  • give SQL Server as much information as possible so it can make better choices e.g. not null

  • maintain statistics (by default not created/updated after create columnstore index)

  • create missing column-level statistics (a sketch follows this list)

  • avoid string columns in fact tables (nice example); replace them with an id referencing a dimension table
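
For the column-level statistics point above, a minimal sketch (hypothetical table and column) is simply:

  -- Create the missing column-level statistics manually
  CREATE STATISTICS Stat_FactSales_ProductKey
  ON dbo.FactSales (ProductKey);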

This was another wide-ranging chapter with plenty of supporting examples. The columnstore index’s internal structure is described in detail, and the batch-mode execution model is explained. Sample code is given to explain columnstore index behaviour and performance. The limitations of columnstore indexes are given, e.g. not all data types are supported. There are some great tips and best practices.

Chapter 35 Clustered Columnstore Indexes

The columnstore indexes described previously were nonclustered and read-only. In SQL Server 2014 it is possible to have updateable clustered columnstore indexes (CCIs). It should be noted that 2014 includes additional batch-mode operators to further improve performance.

The chapter describes the internal structure of CCIs, together with their limitations e.g. no FKs. The internal structure is the same as that of non-clustered columnstore indexes (NCCI), but additionally they have a delete bitmap to indicate deleted rows, and a delta store for holding inserted rows.
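
A hedged sketch of creating a CCI on a hypothetical fact table (SQL Server 2014 syntax):

  -- In 2014 the clustered columnstore index must be the only index on the table
  CREATE CLUSTERED COLUMNSTORE INDEX IX_FactSales_CCI
  ON dbo.FactSales;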

There’s some interesting code showing the impact of batch size and data distribution during bulk inserts. Other sample code looks at the internal structure of the delta store and the delete bitmap, looking deeper into segments and row groups via the sys.column_store_segments and sys.column_store_row_groups views. SQL Server 2014 also has a data_compression=columnstore_archive index property, which gives further compression, ideal for archiving.
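
A hedged sketch of checking the row group state and applying archival compression (the table and index names are hypothetical):

  -- Inspect row group state, size, and number of deleted rows
  SELECT partition_number, row_group_id, state_description,
         total_rows, deleted_rows, size_in_bytes
  FROM sys.column_store_row_groups
  WHERE object_id = OBJECT_ID(N'dbo.FactSales');

  -- Apply archival compression to rarely accessed data
  ALTER INDEX IX_FactSales_CCI ON dbo.FactSales
  REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);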

The chapter then discusses CCI maintenance. There is an overhead in scanning the delta store and delete bitmap during SQL execution; additionally, fragmentation may occur with partially populated row groups. Example code is given to show the impact of a large delta store, a large delete bitmap, and an excessive number of partially populated row groups. Index maintenance fixes all these problems: it removes deleted rows from the index, merges delta store and row group data, and creates fully populated segments.
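
As a hedged sketch, the maintenance itself is an index rebuild, which on a partitioned table can be scoped to a single partition (names and partition number hypothetical):

  -- Rebuild the whole clustered columnstore index...
  ALTER INDEX IX_FactSales_CCI ON dbo.FactSales REBUILD;

  -- ...or, on a partitioned table, rebuild just the affected partition
  ALTER INDEX IX_FactSales_CCI ON dbo.FactSales REBUILD PARTITION = 3;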

The chapter ends with a look at CCI design considerations, including:

  • remove strings from the fact table and replace them with ids referencing dimension tables; better for compression

  • may need to refactor queries to use batch-mode execution

  • may need regular index maintenance if lots of updates/deletes

  • table partitioning is often a must, since index maintenance can then be performed at the partition level

  • for static read-only data an NCCI may be a better choice

This chapter provided a great overview of the purpose and internal structure of CCIs. There are plenty of tips and best practices on index maintenance for optimal performance.
