Europe Gets Its Own LLM
Written by Nikos Vaggalis   
Monday, 10 November 2025

EuroLLM is a fully open-source large language model made in Europe and built to support all twenty-four official EU languages.

While several European states have separately produced their own LLMs, such as Greece's "Meltemi" or Switzerland's recent "Apertus", there was no model that catered for the languages of all 24 member states of the EU bloc. That has now changed with the appearance of EuroLLM, a foundation model that supports them all.

This is yet another venture pushing forward Europe's digital sovereignty plans, which include reducing dependence on the major US LLM providers, giants such as Google, Meta and OpenAI.

The effort falls into the "Strong data infrastructure" category, which holds that for the EU to stay competitive in the AI-dominated era it needs a robust data infrastructure, one that ensures interoperability and can support AI development while protecting citizens' rights and European values.

To put that into practice, EuroLLM required deep cooperation between several European entities: the University of Edinburgh, Sorbonne University, the University of Amsterdam, Horizon Europe and the European Research Council, to name a few.

EuroLLM has been trained on multiple languages and several data sources, including web data and high-quality curated datasets, and comes in several versions:

EuroLLM-9B
The heavyweight option: 9B parameters trained on over 4 trillion tokens of multilingual data across 35 languages, including all official EU languages. EuroLLM-9B-Instruct was further instruction-tuned on EuroBlocks, an instruction-tuning dataset focused on general instruction-following and machine translation. It makes an ideal base for fine-tuning on any task.

EuroLLM-1.7B
The lightweight option: a 1.7B parameter model intended for edge devices.

Soon to be released are a version with a whopping 22B parameters, EuroVLM-9B with a vision encoder, and EuroMoE-2.6B, a sparse mixture-of-experts model for edge devices. So there's something for every use case.

Getting started is easy, as all the model versions are open-sourced on Hugging Face. Open-sourced here means all major components: the base and instruction-tuned models, the EuroFilter classifier and the synthetic post-training dataset. For instance, to use the 9B parameter base model, fetch it from Hugging Face and run:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The base model is a plain completion model, so a translation can be
# prompted by pairing the source text with the target language.
text = "English: My name is EuroLLM. Portuguese:"

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
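For the instruction-tuned EuroLLM-9B-Instruct, prompts go through the tokenizer's chat template rather than raw text. The sketch below uses the standard transformers chat-template API; the system prompt wording and the helper functions are illustrative choices, not anything mandated by the model card, and generation is wrapped in a function since loading the model means a multi-gigabyte download.

```python
def build_messages(source_text, target_language):
    """Compose a chat-style translation request.

    The system prompt wording is illustrative, not prescribed
    by the EuroLLM model card.
    """
    return [
        {"role": "system",
         "content": "You are EuroLLM, a helpful multilingual assistant."},
        {"role": "user",
         "content": f"Translate into {target_language}: {source_text}"},
    ]


def generate_reply(messages,
                   model_id="utter-project/EuroLLM-9B-Instruct",
                   max_new_tokens=40):
    """Load the model (a multi-gigabyte download) and generate a reply."""
    # Imported lazily so composing a prompt does not require transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    # apply_chat_template renders the messages into the model's expected
    # instruction format and appends the assistant turn marker.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


messages = build_messages("My name is EuroLLM.", "Portuguese")
# generate_reply(messages) would then return the model's translation.
```

The same pattern works for the 1.7B model by swapping in its model id, which makes it easy to prototype locally before moving to the larger variant.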


Evaluation results on multilingual benchmarks and machine translation tasks establish the 9B parameter model as the leading open European-made LLM of its size.


Just imagine what the forthcoming 22B version will be capable of!

 

More Information

EuroLLM

Related Articles

Switzerland Releases Its Own Large Language Model

 

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Facebook or Linkedin.



