Databricks Delta Lake Now Open Source
Written by Kay Ewbank   
Friday, 26 April 2019

At the Spark + AI Summit taking place this week in San Francisco, Databricks announced that it has open sourced its Delta Lake storage layer, which handles the stage at which data is brought into an organization's data lake.

Databricks was created as a company by the original developers of Apache Spark and specializes in commercial technologies that make use of Spark. Until now, Delta Lake has been part of Databricks Delta, the company's proprietary stack: a unified analytics engine and associated table format built on top of Apache Spark.

 


Delta Lake is a storage layer that stores data in Apache Parquet format. It is designed for use in data lakes that are built on HDFS and cloud storage.

Data lakes are used to store both structured and unstructured data, but the data can be unreliable because of problems such as schema mismatches and a lack of consistency enforcement. Data can be missing from some columns, and inconsistencies can creep in when schemas are changed in some parts of a pipeline but not in others.

Databricks Delta keeps closer control over the schemas in different parts of the data lake, validating that schema changes are replicated throughout the pipeline. Missing columns of data are correctly set to null, and data definition language (DDL) is used to add new columns and update schemas.
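The idea behind schema enforcement can be sketched in plain Python. This is a conceptual illustration only, not the Delta Lake API: a table records its schema, writes with unknown columns are rejected, missing columns are filled with null, and schema changes happen through an explicit DDL-style operation.

```python
class SchemaError(Exception):
    pass

class Table:
    """Toy table that enforces a schema on every write (illustration only)."""

    def __init__(self, columns):
        self.columns = list(columns)   # the enforced schema
        self.rows = []

    def append(self, record):
        unknown = set(record) - set(self.columns)
        if unknown:
            # Reject writes that do not match the schema.
            raise SchemaError(f"unknown columns: {sorted(unknown)}")
        # Columns missing from the record are set to null (None),
        # rather than silently corrupting the table.
        self.rows.append({c: record.get(c) for c in self.columns})

    def add_column(self, name):
        # Schema evolution via an explicit DDL-style operation;
        # existing rows get null for the new column.
        self.columns.append(name)
        for row in self.rows:
            row[name] = None

t = Table(["id", "name"])
t.append({"id": 1})                  # "name" is filled with None
t.add_column("email")                # schema evolves explicitly
t.append({"id": 2, "name": "Ada", "email": "ada@example.com"})
```

The point of the sketch is that the schema check happens at write time, so a bad record fails loudly instead of leaving a half-consistent table behind.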

Combined with optimistic concurrency control between writes, and snapshot isolation for consistent reads while writes are in progress, these features give Delta Lake ACID transaction support. Delta Lake also uses snapshots to provide data versioning, so you can roll back to an earlier state or reproduce a report. All data is stored in Apache Parquet format, a favorite format for storing and working with large datasets.

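Optimistic concurrency control and snapshot versioning can also be sketched in a few lines of Python. Again, this is a conceptual model rather than Delta Lake's actual implementation: each commit produces a new immutable snapshot, readers always see a complete snapshot, and a write only succeeds if nobody else committed since the writer read its base version.

```python
class ConflictError(Exception):
    pass

class VersionedTable:
    """Toy table with snapshot versions and optimistic writes (illustration only)."""

    def __init__(self):
        self.snapshots = [tuple()]   # version 0 is the empty table

    @property
    def version(self):
        return len(self.snapshots) - 1

    def read(self, version=None):
        # Readers get a consistent snapshot, even while writes are in flight,
        # and can "time travel" by asking for an older version.
        v = self.version if version is None else version
        return self.snapshots[v]

    def commit(self, base_version, new_rows):
        # Optimistic write: succeed only if nothing was committed
        # since the writer read its base snapshot.
        if base_version != self.version:
            raise ConflictError(f"table changed since version {base_version}")
        self.snapshots.append(self.snapshots[-1] + tuple(new_rows))

t = VersionedTable()
v = t.version                        # writer reads at version 0
t.commit(v, [{"id": 1}])             # commit succeeds, creating version 1
old = t.read(version=0)              # time travel back to the empty table
```

A second writer holding the stale base version 0 would get a `ConflictError` and have to retry against the new snapshot, which is the essence of optimistic concurrency.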
Another advantage Delta Lake offers is that you can carry out local development and debugging of data pipelines on your desktop or laptop machine. Delta Lake uses the Spark engine to handle the data lake's metadata, and is compatible with the Apache Spark APIs.

Databricks says Delta is 10-100 times faster than Apache Spark on Parquet. It has been designed for both batch and stream processing, and can be used for pipeline development, data management, and query serving.

Now that Delta Lake is open source, Databricks is open to contributions from outside the company. 
 
 
 

More Information

Databricks Website

Delta Website

Related Articles

Databricks Delta Adds Faster Parquet Import

Databricks Runtime for Machine Learning

Databricks Adds ML Model Export

Spark Gets NLP Library

Apache Spark With Structured Streaming

Spark BI Gets Fine Grain Security

Spark 2.0 Released

 
