Amazon has announced the immediate availability of the beta of Amazon DynamoDB, a fully managed NoSQL database service.
DynamoDB has been created by taking Amazon’s in-house NoSQL database, Dynamo, and putting it into a form suitable for external use as a service. Amazon’s chief technology officer Werner Vogels announced the release in a webcast, describing DynamoDB as:
“the result of everything we've learned for building large non-relational databases for Amazon.com, and from building scalable, high-reliability cloud services for Amazon Web Services."
Adding the database as a service to the Amazon offering means it can compete against systems such as Windows Azure and Google App Engine, as well as traditional database providers such as Oracle with its cloud-based Oracle on Demand.
One strong point of DynamoDB, according to Amazon, is its ability to deal with situations where applications experience explosive growth, when traditional databases require reworking to distribute their workload across multiple servers. Amazon says DynamoDB provides automatic partitioning and re-partitioning of data as needed to meet latency and throughput requirements of highly demanding applications. The pricing structure of DynamoDB means companies will be able to dial their table's request capacity up or down and pay for only the resources they need.
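To make the "dial request capacity up or down" model concrete, here is a minimal sketch in Python. The helper and the table name are hypothetical; in practice the request would be sent to the service via an AWS SDK call such as UpdateTable, but here we only build the request parameters, so no AWS account is needed.

```python
# Sketch: dialing a DynamoDB table's provisioned throughput up or down.
# build_update_throughput_request is a hypothetical helper that assembles
# the parameters an UpdateTable call would take; an AWS SDK would then
# send them to the service.

def build_update_throughput_request(table_name, read_units, write_units):
    """Build parameters for an UpdateTable request that changes
    a table's provisioned read/write capacity."""
    return {
        "TableName": table_name,
        "ProvisionedThroughput": {
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    }

# Dial capacity up ahead of an expected traffic spike...
up = build_update_throughput_request("sessions", read_units=500, write_units=200)
# ...and back down afterwards, paying only for what is provisioned.
down = build_update_throughput_request("sessions", read_units=50, write_units=20)

print(up["ProvisionedThroughput"]["ReadCapacityUnits"])    # 500
print(down["ProvisionedThroughput"]["WriteCapacityUnits"]) # 20
```

Because billing follows the provisioned capacity rather than peak hardware, an application can track its actual load instead of being sized for the worst case.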
Amazon is promising fast, predictable performance at any scale, with average latencies in the single-digit milliseconds for database operations. DynamoDB stores data on Solid State Drives and replicates it synchronously across multiple AWS Availability Zones in an AWS Region to achieve high availability. This promo video explains what it is and makes the case for adopting it:
In a post about the new service on his blog, Vogels says that DynamoDB has been developed based not only on technological advances, but also on what users actually want. He says the original NoSQL Dynamo has been used by a number of core services in the Amazon ecommerce platform, whose engineers have been very satisfied with its performance and incremental scalability, adding:
“However, we never saw much adoption beyond these core services. This was remarkable because although Dynamo was originally built to serve the needs of the shopping cart, its design and implementation were much broader and based on input from many other service architects.”
The problem was that while Dynamo’s reliability, performance, and scalability were all fine, it was complex to run, and most of the engineers preferred alternatives such as Amazon S3 and Amazon SimpleDB, which were built as managed web services that eliminated the operational complexity of managing systems while still providing extremely high durability.
SimpleDB, by contrast, has limitations of its own: its domains are capped at 10GB, read latency grows as the dataset size increases, the way data consistency is managed has drawbacks, and the pricing model is complex.
Vogels says that
“We concluded that an ideal solution would combine the best parts of the original Dynamo design (incremental scalability, predictable high performance) with the best parts of SimpleDB (ease of administration of a cloud service, consistency, and a table-based data model that is richer than a pure key-value store).”
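The "table-based data model that is richer than a pure key-value store" can be illustrated with a short Python sketch. The in-memory dict stands in for the service, and the attribute names are purely illustrative; the point is that items are addressed by a hash key plus a range key, yet each item carries arbitrary named attributes and range queries are possible, neither of which an opaque key-value store offers.

```python
# Sketch of a table-based data model richer than pure key-value storage.
# A plain dict stands in for the service: items are keyed by
# (hash_key, range_key) but hold arbitrary named attributes.

table = {}  # (hash_key, range_key) -> item

def put_item(table, hash_key, range_key, **attributes):
    """Store an item under a composite key; attributes are free-form."""
    table[(hash_key, range_key)] = {"id": hash_key, "ts": range_key, **attributes}

def query(table, hash_key):
    """Return all items sharing a hash key, ordered by range key --
    the kind of range query a pure key-value store cannot express."""
    return [item for (h, _), item in sorted(table.items()) if h == hash_key]

put_item(table, "user1", 1, action="login", device="mobile")
put_item(table, "user1", 2, action="purchase", amount=42)
put_item(table, "user2", 1, action="login")

print(len(query(table, "user1")))  # 2
```

Note that the two "user1" items do not share the same attribute set; per-item schema flexibility is part of what distinguishes this model from both a fixed-schema relational table and an opaque key-value blob.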
The blog post goes into a lot of detail about just what DynamoDB offers, and is well worth reading. One point made by many of the people commenting on the blog post is that the initial beta version has no option for taking snapshot backups, which for many customers will be a deal breaker. Vogels says this will be a high priority in future iterations.
If you’re based in the USA, you can start using DynamoDB for free as Amazon is offering a free tier (http://aws.amazon.com/free/). Other regions are due to be added in the coming months.