Introducing DeepSpeech
Written by Sue Gee   
Wednesday, 29 April 2020

DeepSpeech 0.7.0 is the latest version of Mozilla's open source speech-to-text engine. It was released this week together with new acoustic models trained on American English and a new, faster format for training data.

DeepSpeech 0.7.0, a TensorFlow implementation of Baidu's DeepSpeech architecture, is at the cutting edge of automatic speech recognition technology and yet it has gone largely under the radar.

In fact it is an open source project that Mozilla has been working on since 2016. Its 0.1.0 release was in November 2017 and by the time we first reported on it, when version 0.6.0 was released in December 2019, it had already seen five updates that, in accord with semantic versioning, were backward incompatible, as is the latest release.


So where did DeepSpeech spring from and how does it fit into the ongoing efforts of Mozilla Research into Speech & Machine Learning?

According to the project's documentation, its aim is to create a simple, open, and ubiquitous speech recognition engine.

  • Simple, in that the engine should not require server-class hardware to execute.
  • Open, in that the code and models are released under the Mozilla Public License.
  • Ubiquitous, in that the engine should run on many platforms and have bindings to many different languages. 

The architecture of the engine was originally based on the one developed by Baidu and presented in a 2014 paper, Deep Speech: Scaling up end-to-end speech recognition. It has since diverged in many respects from the engine that inspired it. The core of the engine is a recurrent neural network (RNN) trained to ingest speech spectrograms and generate English text transcriptions.
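A spectrogram is just the magnitude of short-time Fourier transforms taken over overlapping frames of the waveform. The following is a minimal NumPy sketch of that idea; the frame length, hop size and window chosen here are illustrative, not DeepSpeech's actual feature-extraction settings:

```python
import numpy as np

def spectrogram(audio, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Compute a magnitude spectrogram: one FFT per overlapping, windowed frame."""
    frame_len = int(sample_rate * frame_ms / 1000)   # samples per frame
    hop_len = int(sample_rate * hop_ms / 1000)       # samples between frame starts
    window = np.hanning(frame_len)                   # taper each frame to reduce leakage
    frames = [
        audio[start:start + frame_len] * window
        for start in range(0, len(audio) - frame_len + 1, hop_len)
    ]
    # rfft keeps only the non-negative frequency bins (up to Nyquist)
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz tone sampled at 16 kHz
t = np.arange(16000) / 16000
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # (time frames, frequency bins)
```

Each row of the result is one time step of the kind the RNN consumes; a real pipeline would typically also apply a mel filterbank and log scaling on top of this.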



DeepSpeech is composed of two main subsystems: an acoustic model and a decoder. The acoustic model is a deep neural network that receives audio features as inputs, and outputs character probabilities. The decoder uses a beam search algorithm to transform the character probabilities into textual transcripts that are then returned by the system.
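The decoding step can be illustrated with a toy, greedy stand-in for it: take the most probable character at each time step, collapse consecutive repeats, and drop the CTC blank symbol. DeepSpeech itself uses a beam search with an external language-model scorer and a full English alphabet; the tiny alphabet and probabilities below are made up purely to show the character-probabilities-to-text idea:

```python
import numpy as np

ALPHABET = ["a", "b", "c", " ", "_"]   # "_" stands in for the CTC blank symbol
BLANK = ALPHABET.index("_")

def greedy_ctc_decode(char_probs):
    """char_probs: (time_steps, alphabet_size) matrix of per-frame probabilities."""
    best = np.argmax(char_probs, axis=1)      # most likely symbol at each time step
    decoded = []
    prev = None
    for idx in best:
        if idx != prev and idx != BLANK:      # collapse repeats, drop blanks
            decoded.append(ALPHABET[idx])
        prev = idx
    return "".join(decoded)

# Four frames whose per-character probabilities spell out "a", "a", blank, "b"
probs = np.array([
    [0.90, 0.05, 0.02, 0.01, 0.02],
    [0.80, 0.10, 0.05, 0.02, 0.03],
    [0.10, 0.10, 0.10, 0.10, 0.60],
    [0.05, 0.85, 0.05, 0.03, 0.02],
])
print(greedy_ctc_decode(probs))  # "ab"
```

A beam search improves on this by keeping several candidate transcripts alive at each step and weighting them with a language model, which is why DeepSpeech ships a separate scorer alongside the acoustic model.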

The speech samples used for training come from a Mozilla project that we encountered at the beginning of 2019, Common Voice, described as a "voice donation" project to improve virtual assistants.

Firefox needs technologies like DeepSpeech to keep up with the likes of Google Chrome, Google Home and Alexa. One of the most common reasons for not using Firefox is that Chrome has services such as translation. Mozilla really does need to get on top of open source AI.

Details of DeepSpeech 0.7.0 and notable changes from the previous release can be found on its GitHub repo, along with its source code and its two acoustic models.



More Information

DeepSpeech On GitHub

DeepSpeech In NuGet Gallery

Related Articles

Mozilla DeepSpeech Gets Smaller

Mozilla Labs Quietly Relaunched 

Adversarial Attacks On Voice Input

The State Of Voice As UI

Mozilla Layoffs Raise Questions

Why Mozilla Matters  


