Introducing Microsoft SQL Server 2014
Chapter 2 In-Memory OLTP investments
The chapter opens with the lure of the new in-memory OLTP functionality: performance gains of up to 30x may be possible, particularly when it is used together with natively compiled stored procedures.
A bit of history then follows: increasingly large amounts of data need to be processed faster, but CPU speeds seem to have reached a plateau, whilst memory costs have fallen dramatically. With this in mind, Microsoft focused on moving processing from disk to memory, resulting in the in-memory OLTP functionality.
The four pillars of the in-memory OLTP architecture, and the aims behind them, are then discussed.
In-memory OLTP terminology and concepts are discussed next. Traditional disk-based tables are organised as 8KB pages, whereas memory-optimized tables follow a new structure: they reside in memory, with another copy kept on disk for durability. Performance can be improved further if the disk copy is not required (e.g. staging tables are often temporary). Other terms discussed include native compilation, interop, and cross-container transactions.
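As a minimal sketch of the durability point above, a staging table can be created with DURABILITY = SCHEMA_ONLY so that no copy of the data is kept on disk (the table and column names here are hypothetical):

```sql
-- Hypothetical staging table; SCHEMA_ONLY makes the schema durable
-- but not the rows, so no on-disk copy of the data is maintained.
CREATE TABLE dbo.StagingOrders
(
    OrderID   INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Note that in SQL Server 2014 a memory-optimized table must have at least one index; the hash index on the primary key satisfies this.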
Hardware and software requirements for memory-optimized tables are discussed. In summary, the 64-bit version of SQL Server 2014 Enterprise edition is required, the CPU needs to support the cmpxchg16b instruction, and consideration must be given to the additional memory needed, together with the min/max amount of memory assigned to SQL Server.
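The memory cap mentioned above is set with sp_configure; the value below is purely illustrative:

```sql
-- Cap SQL Server's memory, leaving headroom for memory-optimized
-- tables and the rest of the OS (16 GB here is illustrative only).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 16384;
RECONFIGURE;
```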
There follows a small section on in-memory OLTP use cases, but this is very brief, and gives simple general scenarios rather than any specific companies that have created real world solutions. Such use cases exist, and should have been included to demonstrate the improvements made.
An interesting FAQ follows, answering some of the myths that have grown up around in-memory OLTP.
The next section discusses tools that help in the migration to in-memory OLTP. The main tool is the Analysis, Migrate, and Report (AMR) tool, which identifies the most important tables to migrate to in-memory. The section briefly discusses the steps involved in running the tool; however, the steps are too brief to be of real use. Another tool, also too briefly discussed, is the Memory Optimization Advisor, which checks whether a table is suitable for migration.
Various tools can be used to analyse in-memory OLTP behaviour after table migration, including Dynamic Management Views (DMVs), Extended Events, and instrumentation; again, no usable detail (or code) is supplied in the book.
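To give a flavour of the DMVs the book omits, the query below uses the SQL Server 2014 view sys.dm_db_xtp_table_memory_stats to show how much memory each migrated table is consuming:

```sql
-- Memory consumed by each memory-optimized table in the current database.
SELECT OBJECT_NAME(t.object_id)       AS table_name,
       t.memory_used_by_table_kb,
       t.memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_used_by_table_kb DESC;
```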
The final section purports to look at using in-memory OLTP; however, it is really a brief collection of example SQL syntax for enabling in-memory OLTP in a database, and creating memory-optimized tables and natively compiled stored procedures. There is some useful example SQL that shows the improvement in performance after a table has been migrated to in-memory and a natively compiled stored procedure is used.
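The enabling steps described above can be sketched as follows; the database, filegroup, path, and object names are hypothetical, and the procedure assumes a memory-optimized dbo.Orders table already exists:

```sql
-- Add a memory-optimized filegroup to an existing database.
ALTER DATABASE SalesDB
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'imoltp_file', FILENAME = 'C:\Data\imoltp_file')
    TO FILEGROUP imoltp_fg;
GO

-- A minimal natively compiled stored procedure. SQL Server 2014
-- requires SCHEMABINDING, EXECUTE AS, and an ATOMIC block with an
-- explicit isolation level and language.
CREATE PROCEDURE dbo.usp_GetOrder @OrderID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'English')
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderID = @OrderID;
END;
```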
There is an appendix that contains the complete (BOL-like) SQL syntax for: memory-optimized tables, and natively compiled stored procedures.
This is another very interesting and useful chapter. It explains briefly the major features of in-memory OLTP, together with details of how to identify the most relevant tables to migrate.
The AMR tool could have been described in more detail; for example, its use in identifying the top candidate stored procedures to migrate was not discussed. Additionally, no mention was made of the Native Compilation Advisor, which identifies SQL that might need amending before it can be natively compiled. Luckily, such detail is provided in the article “SQL Server 2014’s Analysis, Migrate, and Report Tool” in April’s SQL Server Pro magazine, see my overview.
Chapter 3 High-availability, hybrid-cloud, and backup enhancements
The chapter opens with an overview of the major High Availability (HA) and Disaster Recovery (DR) enhancements. The maximum number of secondary replicas has increased from 4 to 8, and secondaries can now function even if the network connection with the primary is broken. The new Azure Replica wizard makes using the cloud to store a replica much easier than the manual steps required in SQL Server 2012. Having a replica in the cloud is useful for DR, especially if you don’t have the hardware for a remote site, or your other secondaries are in close proximity to the primary (e.g. in the same data centre).
There have also been enhancements to AlwaysOn Failover Cluster Instances (FCIs), including support for cluster shared volumes, which reduce the number of LUNs needed, so more FCIs can be hosted on a Windows Server Failover Cluster (WSFC). Various new and enhanced DMVs can be used to monitor the FCIs.
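As one example of the monitoring DMVs mentioned above, SQL Server 2014 adds a view for inspecting the cluster shared volume paths available to an FCI (a sketch, assuming the instance is clustered):

```sql
-- New in SQL Server 2014: valid cluster shared volume paths for the FCI.
SELECT path_name, cluster_owner_node
FROM sys.dm_io_cluster_valid_path_names;
```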
Various advantages of using SQL Server 2014 as a platform for the hybrid cloud are then discussed.
Even with HA, DR, and hybrid-cloud solutions, there is still a need for a robust backup and restore strategy; the chapter reviews the various backup and restore enhancements.
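Two of the SQL Server 2014 enhancements, backup to Azure blob storage and backup encryption, can be combined in a single statement; the storage URL, credential, and certificate names below are hypothetical:

```sql
-- Back up directly to Azure blob storage, compressed and encrypted.
-- The credential and certificate must be created beforehand.
BACKUP DATABASE SalesDB
TO URL = 'https://myaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
```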
This chapter provides a good overview of the new features relating to HA, DR, use of the hybrid cloud, and backups. The chapter has some useful diagrams that assist in understanding the text. Although high-level deployment steps are provided, many are too brief to be of real use other than for general guidance. Again, some very valid arguments are made for including the cloud as part of your solutions. The benefits of storing backups in the cloud look especially useful.
Last Updated (Sunday, 10 August 2014)