Author: Kalen Delaney
Publisher: Red Gate Books
Audience: DBAs and developers
Reviewer: Ian Stirk
In-Memory OLTP is the major new feature in SQL Server 2014. In this slim volume a notable internals expert endeavours to explain it all.
Historically, memory has been expensive, so SQL Server was built around disk-based storage. However, things have changed: memory is increasingly cheap and systems with multiple CPUs are commonplace. In-Memory OLTP takes advantage of this technology shift.
SQL Server 2014 can be defined primarily as a performance-based release, with its main feature being In-Memory OLTP, so this book should be of great interest to DBAs and keen SQL developers. Specifically, the book is “...focusing on details of memory-optimized tables and indexes, how the in-memory engine delivers transactional consistency (ACID compliance) without locking or latching, and the mechanics of its checkpointing, logging and garbage collection mechanisms.”
The author is a renowned SQL Server internals expert, widely known as the author/editor of the SQL Server Internals series of books; my review of SQL Server 2012 Internals is also on I Programmer.
Below is a chapter-by-chapter exploration of the topics covered.
Chapter 1 What's Special About In-Memory OLTP?
In-Memory OLTP centres on in-memory tables and indexes, the lock and latch free concurrency model, and natively compiled stored procedures. These features are implemented transparently, and can improve performance significantly. The chapter expands on these central topics.
In-Memory tables and their indexes are held entirely in memory. The tables do not have pages or extents; instead, rows are linked together by their indexes - you must define at least one index.
Natively compiled stored procedures contain fewer CPU instructions and need no further compilation/interpretation to run, potentially making them significantly faster than traditional interpreted stored procedures. Natively compiled stored procedures can only run against In-Memory tables, whereas traditional interpreted stored procedures can run against both disk-based tables and In-Memory tables via interop, during cross-container transactions.
In-Memory OLTP further improves performance by implementing a new Multi-Version Concurrency Control (MVCC) mechanism to facilitate optimistic concurrency. Here there are no locks or latches, instead multiple versions of rows can be created, and the correct rows chosen based on the timestamps/statuses contained within each row.
Indexes on In-Memory tables have a different structure compared with disk-based indexes. There must be at least one index present to give the table structure, and only one unique index can be created (the primary key can fulfil this). Changes to indexes are never logged, since during recovery the indexes are rebuilt from the underlying In-Memory table. There are two types of index: Hash (best for looking up specific values) and Range (best for retrieving a range of values).
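As a sketch of the two index types (table, column and index names here are hypothetical), using the inline index syntax SQL Server 2014 introduced for memory-optimized tables:

```sql
-- Hypothetical memory-optimized table showing both index types.
CREATE TABLE dbo.OrderLookup
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),  -- Hash: point lookups
    OrderDate DATETIME2 NOT NULL
        INDEX ix_OrderDate NONCLUSTERED,                              -- Range: range scans
    CustomerId INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

For a hash index, the BUCKET_COUNT should be sized to roughly the number of expected unique key values; too small a value leads to long hash chains and slower lookups.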
The chapter ends with a look at comparable products from competitors. The main impression is that only the Microsoft product allows partial adoption (i.e. you can move just your critical tables/stored procedures to In-Memory OLTP). This area is changing rapidly, so expect frequent developments.
This chapter provides an overview of what In-Memory OLTP is, its components and structures, and why it is important. To put In-Memory OLTP into context, comparisons are made with disk-based tables and with related products from competitors. This chapter provides a good overview of what to expect in the rest of the book; it already feels narrow in scope but with significant depth.
Throughout the book there are useful diagrams to support the discussions, good links with related chapters, and links to websites for further information.
Chapter 2 Creating and Accessing In-Memory OLTP Databases and Tables
The chapter opens with a look at the syntax for creating a database that supports In-Memory OLTP. First, a MEMORY_OPTIMIZED_DATA filegroup needs to be created to contain the checkpoint files needed for recovery. Checkpoint files consist of data files (containing all versions of inserted rows) and corresponding delta files (containing references to all versions of deleted rows) - these are needed because rows are never locked (hence the use of versions) and never updated in place (instead, the old row is deleted and a new one inserted). Logging for In-Memory tables is much more efficient than for disk-based tables. It is possible to specify the collation (only BIN2 is allowed) at the database level, but it is recommended to apply it at the table/index column level instead, since a database-level BIN2 collation makes the metadata (e.g. table names) case-sensitive.
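As a sketch of the database-creation syntax described above (database name and file paths are hypothetical), the key element is the CONTAINS MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Hypothetical database enabled for In-Memory OLTP.
-- The memory-optimized filegroup holds the checkpoint data/delta file pairs.
CREATE DATABASE InMemDemo
ON PRIMARY
    (NAME = InMemDemo_data, FILENAME = 'C:\Data\InMemDemo_data.mdf'),
FILEGROUP InMemDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = InMemDemo_mod, FILENAME = 'C:\Data\InMemDemo_mod')  -- a folder, not a file
LOG ON
    (NAME = InMemDemo_log, FILENAME = 'C:\Data\InMemDemo_log.ldf');
```

Note that the memory-optimized filegroup's FILENAME points to a folder rather than a file; an equivalent filegroup can also be added to an existing database via ALTER DATABASE ... ADD FILEGROUP.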
The chapter continues with a look at the syntax for creating In-Memory tables, which is similar to that for disk-based tables but with extensions (specifically the MEMORY_OPTIMIZED clause) and limitations. In-Memory tables must specify a durability of SCHEMA_ONLY if the table is not durable, or SCHEMA_AND_DATA if it is durable (in which case changes are logged and persisted to the checkpoint files). In-Memory tables must have at least one index and can have up to eight indexes, including the primary key index; clustered indexes are not permitted. The various constraints include: no foreign keys or check constraints, any identity columns must have a seed and increment of 1, and no DML triggers. There are restrictions on the data types that can be used: mostly simple types are supported, while CLR, XML and MAX data types are not.
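For illustration (table and column names are hypothetical), a non-durable SCHEMA_ONLY table - a common choice for transient data such as session state - might look like this:

```sql
-- Hypothetical non-durable table: contents are lost on restart,
-- only the schema is recovered.
CREATE TABLE dbo.SessionState
(
    SessionId UNIQUEIDENTIFIER NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload VARBINARY(8000) NOT NULL,   -- MAX types are not supported in 2014
    LastTouched DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Because nothing is logged or checkpointed for a SCHEMA_ONLY table, writes to it avoid the transaction log entirely.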
The chapter examines accessing In-Memory tables via traditional interpreted T-SQL, which uses interop; this is slower than using natively compiled code, but usually faster than running the same queries against comparable disk-based tables. Additionally, certain SQL is not permitted in this interpreted T-SQL, e.g. TRUNCATE TABLE and the PAGLOCK hint. By comparison, natively compiled stored procedures are the fastest way to access In-Memory tables, but cannot access disk-based tables. Similarly, there are restrictions on the SQL that can be used within natively compiled stored procedures.
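As a sketch of a natively compiled stored procedure (all names are hypothetical, and a memory-optimized table dbo.OrderLookup is assumed to exist), note the clauses SQL Server 2014 requires:

```sql
-- Hypothetical natively compiled procedure: must be schema-bound,
-- specify an EXECUTE AS clause, and run inside an ATOMIC block.
CREATE PROCEDURE dbo.usp_TouchOrder @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.OrderLookup
    SET OrderDate = SYSUTCDATETIME()
    WHERE OrderId = @OrderId;
END;
```

The ATOMIC block replaces explicit BEGIN/COMMIT TRANSACTION statements: the whole procedure body either commits or rolls back as a unit.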
This chapter is primarily concerned with the syntax of the structures required to create In-Memory tables and indexes, together with their extensions and limitations. Step-by-step examples are provided. Tables are explained in terms of indexes, durability and restrictions (e.g. data types and constraints). Helpful sample SQL is provided to create databases, add filegroups for In-Memory tables to an existing database, and create In-Memory tables and indexes. The potential problem of the BIN2 collation sequence is highlighted.