Google Releases FunctionGemma Model
Written by Nikos Vaggalis   
Wednesday, 21 January 2026

FunctionGemma is an open Google model, fine-tuned for function calling, that aims to revolutionize the way we interact with our devices.

Technically, FunctionGemma is a specialized version of the Gemma 3 270M model, dedicated to function calling. But first of all, what is Gemma? We first met it in February 2024 in Google Releases Gemma Open Models, where we reported: 

Google has released a set of lightweight open models that have been built from the same research and technology used to create Google's recent Gemini models. The models in Gemma are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants.

Those Gemma models serve as a basis for more specialized models, like EmbeddingGemma, which we explored in Google Pioneers On-Device Embedding:

EmbeddingGemma is a new open embedding model that delivers value for money for its size. Based on the Gemma 3 architecture, it is trained on 100+ languages and is small enough to run on less than 200MB of RAM with quantization.

Like EmbeddingGemma, FunctionGemma is based on Gemma and targets low-resource devices.

The main issue it addresses is that today's AI assistants require a connection to the cloud to process the user's voice commands, which means less privacy and more latency. FunctionGemma turns that on its head by being 100% local, fast and private. With just 270 million parameters it runs entirely on-device, with no servers and no cloud, so no data ever leaves your device.

Practically, it is designed for function calling: turning natural language requests into structured API/tool calls. It is the right tool if:

  • You have a defined API surface: Your application has a defined set of actions (e.g., smart home, media, navigation).

  • You are ready to fine-tune: You need the consistent, deterministic behavior that comes from fine-tuning on specific data, rather than the variability of zero-shot prompting.

  • You prioritize local-first deployment: Your application requires near-instant latency and total data privacy, running efficiently within the compute and battery limits of edge devices.

  • You are building compound systems: You need a lightweight edge model to handle local actions, allowing your system to process common commands on-device and only query larger models (like Gemma 3 27B) for more complex tasks.

In other words, FunctionGemma is intended to be fine-tuned for task-specific tool calling, not used as a direct dialogue model.
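To make "a defined API surface" concrete, here is a minimal sketch of the kind of tool declaration and structured output involved. The function name, fields and JSON layout are purely illustrative; the exact schema depends on the framework you use to fine-tune and serve the model.

# Illustrative only: a hypothetical tool declaration for a mobile assistant
# and the structured call a function-calling model is expected to emit.
create_calendar_event = {
    "name": "create_calendar_event",
    "description": "Create an event in the user's calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Event title."},
            "date": {"type": "string", "description": "Date, YYYY-MM-DD."},
            "time": {"type": "string", "description": "Start time, HH:MM."},
        },
        "required": ["title", "date"],
    },
}

# For the request "Create a calendar event for lunch tomorrow" the model
# should produce a structured call rather than free-form text, e.g.:
expected_call = {
    "name": "create_calendar_event",
    "arguments": {"title": "Lunch", "date": "2026-01-22", "time": "12:00"},
}

The application then maps that structured call onto whatever OS or API action actually performs the task.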

The official demo demonstrates that case, with the model fine-tuned to become a mobile actions assistant executing tasks like "Create a calendar event for lunch tomorrow," "Add John to my contacts" or "Turn on the flashlight."

The model parses the natural language and identifies the correct OS tool to execute the command.
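As a rough sketch of how that might look in code, the snippet below loads the model with Hugging Face Transformers, declares a single tool and asks for a structured call. The model id is an assumption (check the official model card for the real one), and it assumes the model's chat template accepts the standard tools argument; the guides linked below cover the supported workflow in detail.

# Minimal sketch, not the official workflow. Assumptions: the model id is
# hypothetical and the chat template accepts the standard `tools` argument.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/functiongemma-270m"  # hypothetical id - verify on the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def turn_on_flashlight(brightness: int = 100) -> str:
    """Turn on the device flashlight.

    Args:
        brightness: Brightness level from 0 to 100.
    """
    return "ok"

messages = [{"role": "user", "content": "Turn on the flashlight"}]

# The chat template serializes the conversation plus the available tools,
# so the model can answer with a structured call to one of them.
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[turn_on_flashlight],
    add_generation_prompt=True,
    return_tensors="pt",
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))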

Google offers an official fine-tuning guide, and there's also a nice guide from Unsloth. Links below.

And with that, we are officially entering the era of small, specialized models that make truly local-first, smart, AI-powered applications a reality.

 

More Information

Google FunctionGemma fine-tuning cookbook

Unsloth guide: How to fine-tune FunctionGemma and run it locally

Related Articles

Google Releases Gemma Open Models

Google Pioneers On-Device Embedding 

 


Last Updated ( Wednesday, 21 January 2026 )