Google Releases Gemma Open Models
Written by Kay Ewbank   
Wednesday, 28 February 2024

Google has released a set of lightweight open models that have been built from the same research and technology used to create Google's recent Gemini models.

The models in Gemma are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants.


The Gemma team says its models share technical and infrastructure components with Gemini, which enables the two model sizes being introduced - Gemma 2B and Gemma 7B - to perform well for their size compared to other open models. Gemma models are capable of running directly on a developer's laptop or desktop computer.

The Gemma team says both model sizes are being released with pre-trained and instruction-tuned variants. The release is accompanied by a new Responsible Generative AI Toolkit that provides guidance and essential tools for creating safer AI applications with Gemma. The toolkit includes resources for applying best practices for responsible use of open models, covering topics such as setting safety policies, safety tuning, safety classifiers and model evaluation. It also includes the Learning Interpretability Tool (LIT), which can be used to investigate Gemma's behavior and address potential issues.

Google is also providing toolchains for inference and supervised fine-tuning (SFT) across frameworks including JAX, PyTorch, and TensorFlow through native Keras 3.0. There are ready-to-use Colab and Kaggle notebooks, and the software is integrated with tools such as Hugging Face, MaxText, NVIDIA NeMo and TensorRT-LLM.
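As a rough idea of what the Hugging Face integration looks like in practice, the sketch below loads the instruction-tuned 2B model with the transformers library and generates a completion. The model identifier is an assumption based on Hugging Face Hub naming conventions, and downloading the weights requires accepting Google's license terms on the Hub first.

```python
# Minimal sketch of running Gemma via the Hugging Face transformers
# integration. MODEL_ID is an assumed Hub identifier; access to the
# Gemma weights requires accepting Google's terms on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2b-it"  # assumed name for the 2B instruction-tuned variant

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Tokenize a prompt, run greedy generation, and decode the result."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example call (downloads the weights on first use):
# print(generate("Explain what a decoder-only language model is."))
```

The same checkpoints can also be used from Keras 3.0 via KerasNLP presets, which is the route Google highlights for switching between the JAX, PyTorch, and TensorFlow backends.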

Google says Gemma is optimized across several AI hardware platforms including NVIDIA GPUs and Google Cloud TPUs. The cloud optimization comes via Vertex AI, which Google describes as providing a broad MLOps toolset with a range of tuning options and one-click deployment using built-in inference optimizations. Advanced customization is available with fully managed Vertex AI tools or with self-managed Google Kubernetes Engine (GKE).

Alongside the main Gemma release, several implementations of the models are available on GitHub: an official PyTorch implementation; a lightweight, standalone C++ inference engine for the Gemma foundation models; and an inference implementation with examples, based on Flax and JAX.

Google Gemma is available now. 


More Information

Google Gemma

PyTorch Implementation Of Gemma Models On GitHub

Lightweight C++ Inference Engine On GitHub

Inference Implementation Based on Flax and JAX

Related Articles

Google Rebrands Bard With Subscription

Google Adds Gemini To Bard

Google Adds Code Generation To Bard

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.

