This month's issue of SQL Server Pro magazine takes a look at SQL Server 2014’s revolutionary performance technology, the In-Memory OLTP Engine.
Last September's issue of SQL Server Pro magazine had us looking forward to SQL Server 2014, including an FAQ article about its In-Memory OLTP Engine. Now the cover story in the February 2014 issue takes a more detailed look at this feature, which is expected to improve the performance of your queries 10-fold or better.
As usual, in this overview of the online subscription-based monthly magazine, the items in bold below correspond to the names of the articles.
Rev Up Application Performance with In-Memory OLTP starts with an overview of the architecture. Relevant tables are held in memory and accessed in a lock- and latch-free manner. A new optimistic multi-version concurrency model is used: when a row is changed, a new version of the row is created and timestamped. The engine validates any changed rows before committing them. This optimistic approach leaves unwanted row versions in memory, which are periodically freed by a garbage collector. Everything is very fast because it’s all done in memory. Additionally, stored procedures, previously interpreted, can now be compiled to native code, again giving faster performance.
The article proceeds with a look at requirements: these include an x64 server that supports the cmpxchg16b instruction (which most modern processors do), a large amount of memory (Microsoft recommends double the comparable on-disk storage space of the tables/indexes), and Windows Server 2012 R2, Windows Server 2012, or Windows Server 2008 R2 SP2. Lastly, you’ll need the Enterprise edition (or equivalent) of SQL Server 2014. Various limitations follow: certain (potentially larger) datatypes, e.g. VARCHAR(MAX), are not permitted, and database mirroring, database snapshots, and triggers are not allowed.
Luckily, there is an Analysis, Migrate and Report (AMR) tool to help you discover tables and stored procedures that might benefit from being in-memory. Part of this tool involves setting up the Management Data Warehouse (MDW) to collect metrics before and after any proposed changes.
The article finishes with a look at the T-SQL necessary to: create an in-memory OLTP database, add a memory-optimized file group to an existing database, create a simple memory-optimized table, and create a memory-optimized stored procedure.
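As a flavour of what that T-SQL looks like, here is a minimal sketch covering the filegroup, table, and natively compiled procedure steps (illustrative database, path, and object names; not the article’s exact code):

```sql
-- Add a memory-optimized filegroup and container to an existing database
ALTER DATABASE MyDatabase
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyDatabase
ADD FILE (NAME = 'imoltp_data', FILENAME = 'D:\Data\imoltp_data')
TO FILEGROUP imoltp_fg;
GO

-- Create a simple memory-optimized table (hash index needs a bucket count)
CREATE TABLE dbo.Orders
(
    OrderId    INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Create a natively compiled stored procedure against that table
CREATE PROCEDURE dbo.GetOrder @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders
    WHERE OrderId = @OrderId;
END;
```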
I suspect this will be a key initial article for many SQL Server professionals. It’s wide-ranging, covering the most salient aspects of the In-Memory OLTP engine, together with useful example code and links to further information.
The theme of innovation continues with this month’s editorial Hot Database Industry Trends, with the big trends over the last 12 months being:
In-memory OLTP engine – expected to give a 10-fold performance improvement
Solid State Disks (SSDs) – now more affordable, with enterprise-level capacity
Software Defined Networks (SDN) – basically virtualization for networks, giving greater flexibility and faster deployments
Cloud-based products (including backups and disaster recovery)
There’s an interesting collection of technologies covered, and it’s always worthwhile looking at what’s coming in the near future - at the technologies you’ll be using tomorrow.
If you enjoy puzzles, then you’ll be interested in Identifying a Subsequence in a Sequence. The puzzle involves finding the subsequence 1, 7, 5, 9 within the sequence 1, 1, 7, 5, 9, 1, 7, 1, 7, 5, 9; it occurs at positions 2 to 5 and 8 to 11 in the original sequence. The first article provides the iterative solutions, namely a recursive query, a loop, and the Divide and Conquer Halloween approach, while the next one will provide set-based solutions. Both articles are already online at the SQL Server Pro site and available to all readers.
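For flavour, one possible set-based approach is a classic relational-division query (not necessarily one of the articles’ solutions). This sketch assumes hypothetical tables Seq(pos, val) for the sequence and Sub(pos, val) for the subsequence, with 1-based positions:

```sql
-- A start position qualifies when there is no subsequence element
-- that fails to match the sequence at the corresponding offset.
SELECT s.pos AS start_pos
FROM Seq AS s
WHERE NOT EXISTS
      (SELECT *
       FROM Sub AS b
       WHERE NOT EXISTS
             (SELECT *
              FROM Seq AS s2
              WHERE s2.pos = s.pos + b.pos - 1
                AND s2.val = b.val));
```

For the data above this would return start positions 2 and 8, matching the two occurrences the article identifies.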
The Database Corruption series continues with Backups. Since backups have to be taken anyway, it makes sense to include options that help determine whether the backup is good. Specifically, CHECKSUM can be added to the BACKUP command, and the VERIFYONLY option can be used with the RESTORE command. As the author correctly identifies, while these options increase your confidence that a backup is viable, there is no substitute for performing DBCC CHECKDB on a regular basis. I must admit I’m really enjoying this clear and concise series of corruption articles.
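The two options look something like this (illustrative database name and backup path; not the article’s exact code):

```sql
-- Take a backup, computing and storing page checksums as it goes
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH CHECKSUM;

-- Verify the backup is readable, re-checking the stored checksums,
-- without actually restoring it
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH CHECKSUM;
```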
When troubleshooting, logs are essential: where do you go to get error or diagnostic information? SQL Server Log Files Update briefly discusses the different types of logs, their location, and how to read them. Log files discussed are:
Windows Event Log – not just SQL Server messages, but also SSIS, etc.
SQL Server Error Log – the most important SQL log file
SQL Server Agent Error Log – contains messages from SQL Server’s job scheduling subsystem
SQL Server Profiler Logs – contains trace information
SQL Server Setup Log – contains SQL Server setup information
While introductory, this is a much-needed checklist of where to look when problems occur (in addition to any application-based logs). It would have been useful to also include the Default Trace in this list, since it automatically logs some very important information (e.g. who dropped a table, and when) - which can be a life-saver!
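For reference, the default trace can be queried along these lines (a sketch, not from the article; the columns and filtering may need tuning for your purposes):

```sql
-- Read the current default trace and translate event ids to names.
-- Events such as Object:Deleted reveal who dropped an object, and when.
SELECT te.name AS EventName,
       t.DatabaseName,
       t.ObjectName,
       t.LoginName,
       t.StartTime
FROM sys.traces AS tr
CROSS APPLY sys.fn_trace_gettable(tr.path, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE tr.is_default = 1
ORDER BY t.StartTime DESC;
```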
In SQL Server High Availability with Amazon EC2, Denny Cherry discusses how to set up clustering in an environment that uses Amazon’s Elastic Compute Cloud (EC2) to host SQL Server databases. You can’t use native functionality because Amazon’s EC2 service can’t present the same disks to multiple virtual servers at the same time. Instead the author explains how to use SIOS Technology’s DataKeeper Cluster Edition to fulfil this function. If you use EC2 and need High Availability, this article is a must!
Gather SQL Server Instance Information with PowerShell, another article that is also freely available online, starts by discussing setting up your environment using the SQLPS module, which gives you access to the Server Management Objects (SMO) API. The SMO API can be used by PowerShell to gather lots of useful information about the SQL Server environment; specifically, the code provided gets the instance name, edition, version, the number of CPUs and memory on the box, and various path locations. While the use of PowerShell is growing apace, I do wonder why we need yet another introductory article so soon after other recent ones in this magazine.
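The article’s code works through SMO; for comparison, much of the same information is exposed directly in T-SQL. A minimal sketch (not the article’s code):

```sql
-- Instance name, edition, and version via SERVERPROPERTY
SELECT SERVERPROPERTY('ServerName')     AS InstanceName,
       SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS Version;

-- CPU count and physical memory on the box
-- (column is physical_memory_kb from SQL Server 2012 onwards)
SELECT cpu_count, physical_memory_kb
FROM sys.dm_os_sys_info;
```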
There’s a discussion on how to Evaluate Logical Expressions Using Recursive CTEs and Reverse Polish Notation. The article, also online, uses the example of an employee table containing attributes for each employee; these attributes can be resolved to Booleans. Using a combination of CTEs and Reverse Polish Notation (RPN) allows you to determine what each employee can do. If you have a problem of this nature, this solution is both interesting and innovative.
In Using SQL Server Spatial Data with .NET Applications, the author discusses what spatial datatypes are and their ease of use in .NET code. What’s often surprising is the amount of functionality provided in the .NET library for these datatypes; they contain properties based on years of mapmaking. The author provides a brief review of the book Pro Spatial with SQL Server 2012, which he found essential reading. It should be noted that spatial data is vital for location-aware applications, so there will be increasing demand for it with the growth of mobile phone apps.
I enjoyed this issue; it contains a very good mix of articles: must-haves (corruption and useful logs), the interesting and innovative (puzzles, and CTEs with RPN), together with eagerly awaited new technology (the In-Memory OLTP engine, and hot trends).