|Databricks Adds ML Model Export|
|Written by Kay Ewbank|
|Monday, 19 March 2018|
Databricks has added a machine learning model export feature that can be used to export models from Apache Spark MLlib.
Apache Spark is an open-source big data processing engine, of which Databricks offers a free Community Edition on its cloud-based platform. Spark is implemented in Scala and Java, runs on a cluster, and improves on Hadoop MapReduce performance, running programs up to 100 times faster in memory and ten times faster on disk, according to Apache. The Databricks commercial product is its Unified Analytics Platform. This runs an optimized version of Spark that can be between 10 and 40 times faster, along with interactive notebooks, integrated workflows, and full enterprise security.
The new feature is Databricks ML Model Export, and it can be used to export models and full machine learning pipelines from Apache Spark MLlib. These exported models and pipelines can be imported into other platforms, Spark and non-Spark alike, to do scoring and make predictions. The feature is designed to provide an alternative to batch and streaming prediction within Spark. Model Export lets you achieve very low latency, in the millisecond range, and paves the way to using ML models and pipelines in custom deployments.
MLlib models are exported as JSON files, with a format matching the Spark ML persistence format. The key changes from MLlib’s format are the use of JSON instead of Parquet, and the addition of extra metadata. This extra metadata allows scoring outside of Spark.
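To illustrate why JSON plus metadata enables scoring outside of Spark, here is a minimal sketch in plain Python. The JSON field names (`coefficients`, `intercept`, `paramMap`) are hypothetical, not the actual Databricks export schema; the point is that a logistic regression model reduced to its coefficients can be scored anywhere, with no Spark runtime.

```python
import json
import math

# Hypothetical exported logistic regression model; the field names below
# are illustrative only, not the real Databricks ML Model Export schema.
exported_model = json.loads("""
{
  "class": "org.apache.spark.ml.classification.LogisticRegressionModel",
  "paramMap": {"threshold": 0.5},
  "coefficients": [0.8, -1.2, 0.3],
  "intercept": 0.25
}
""")

def score(features, model):
    """Score one feature vector outside Spark using exported coefficients."""
    margin = model["intercept"] + sum(
        w * x for w, x in zip(model["coefficients"], features)
    )
    probability = 1.0 / (1.0 + math.exp(-margin))  # logistic (sigmoid)
    threshold = model["paramMap"]["threshold"]
    prediction = 1.0 if probability > threshold else 0.0
    return probability, prediction

prob, pred = score([1.0, 0.5, 2.0], exported_model)
```

Because scoring is just arithmetic on the deserialized parameters, latency is dominated by a handful of multiplications rather than Spark job scheduling, which is what makes the millisecond-range predictions mentioned above possible.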
The list of supported models starts with full ML pipelines that contain supported transformers and models. The pipelines must be trained. Specific model types that can be exported in this release are decision tree classifier; decision tree regression; logistic regression; random forest classifier; and random forest regression. Support for more model types will be added in future releases.
|Last Updated ( Monday, 19 March 2018 )|