Google Releases Gemma Open Models
Written by Kay Ewbank   
Wednesday, 28 February 2024

Google has released a set of lightweight open models that have been built from the same research and technology used to create Google's recent Gemini models.

The Gemma models are text-to-text, decoder-only large language models, available in English, with open weights and both pre-trained and instruction-tuned variants.


The Gemma team says its models share technical and infrastructure components with Gemini, which enables the two sizes being introduced - Gemma 2B and 7B - to perform well for their size compared to other open models. Gemma models can run directly on a developer's laptop or desktop computer.

Both sizes of model weights are being released with pre-trained and instruction-tuned variants. The release is accompanied by a new Responsible Generative AI Toolkit that provides guidance and essential tools for creating safer AI applications with Gemma. The toolkit has resources for applying best practices for responsible use of open models, including guidance on setting safety policies, safety tuning, safety classifiers and model evaluation. It also includes the Learning Interpretability Tool (LIT), which can be used to investigate Gemma's behavior and to address any potential issues.
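For readers wondering how the instruction-tuned variants are prompted: Gemma's model documentation (not this article) describes a simple turn-based chat format delimited by `<start_of_turn>` and `<end_of_turn>` markers. A minimal sketch of building such a prompt, assuming that documented format:

```python
def format_gemma_turn(user_message: str) -> str:
    """Wrap a user message in Gemma's turn-based chat markers.

    The instruction-tuned Gemma variants expect prompts delimited by
    <start_of_turn>/<end_of_turn> tokens, ending with an opened
    model turn for the model to complete.
    """
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Example: build a prompt for an instruction-tuned checkpoint.
prompt = format_gemma_turn("Write a haiku about open models.")
print(prompt)
```

In practice a tokenizer's built-in chat template (where available) should be preferred over hand-rolling these markers.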

Google is also providing toolchains for inference and supervised fine-tuning (SFT) across frameworks including JAX, PyTorch, and TensorFlow through native Keras 3.0. There are ready-to-use Colab and Kaggle notebooks, and the software is integrated with tools such as Hugging Face, MaxText, NVIDIA NeMo and TensorRT-LLM.
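As a concrete illustration of the Hugging Face integration mentioned above, the sketch below loads an instruction-tuned Gemma checkpoint through the transformers library. The model id `google/gemma-2b-it` and the API calls come from the Hugging Face Hub rather than from this article; the download is large and requires accepting Gemma's licence terms on the Hub, so execution is gated behind a flag:

```python
# Sketch only: generating text from Gemma 2B via Hugging Face transformers.
# Requires the transformers and torch packages plus a licence-accepted
# download from the Hub, so the demo is gated behind RUN_DEMO.
RUN_DEMO = False  # flip to True once the Gemma licence has been accepted

def generate(prompt: str, max_new_tokens: int = 32) -> str:
    # Imports are local so the function can be defined (and the sketch
    # read) without the transformers package installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it")
    model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if RUN_DEMO:
    print(generate("What is a decoder-only language model?"))
```

The same checkpoints can also be used from Keras 3.0 or the Colab and Kaggle notebooks Google provides.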

Google says Gemma is optimized across several AI hardware platforms including NVIDIA GPUs and Google Cloud TPUs. The cloud optimization comes via Vertex AI, which Google describes as providing a broad MLOps toolset with a range of tuning options and one-click deployment using built-in inference optimizations. Advanced customization is available with fully-managed Vertex AI tools or with self-managed GKE.

Alongside the main release, several implementations of the models are available on GitHub: an official PyTorch implementation; a lightweight, standalone C++ inference engine for the Gemma foundation models; and an inference implementation with examples, based on Flax and JAX.

UPDATE: Google Gemma is available on Kaggle


More Information

Google Gemma

Gemma on Kaggle

PyTorch Implementation Of Gemma Models On GitHub

Lightweight C++ Inference Engine On GitHub

Inference Implementation Based on Flax and JAX

Related Articles

Google Rebrands Bard With Subscription

Google Adds Gemini To Bard

Google Adds Code Generation To Bard




Last Updated ( Wednesday, 08 May 2024 )