Europe Gets Its Own LLM
Written by Nikos Vaggalis
Monday, 10 November 2025
EuroLLM is a fully open-sourced large language model made in Europe and built to support all 24 official EU languages.

While several European states have separately produced their own LLMs, such as Greece's "Meltemi" or Switzerland's recent offering "Apertus", there was no model that covered the official languages of every member state of the EU bloc. That has now changed with the arrival of EuroLLM, a foundation model that supports them all.

This is yet another venture in pushing forward Europe's sovereignty plans, which include disengaging from the major US service and LLM providers Google, Meta and OpenAI. The effort falls under the "strong data infrastructure" heading, the idea being that for the EU to stay competitive in an AI-dominated era it needs a data infrastructure that ensures interoperability and can support AI development while protecting citizens' rights and European values.

To put that into practice, EuroLLM required deep cooperation between several European institutions, among them the University of Edinburgh, Sorbonne University and the University of Amsterdam, with backing from Horizon Europe and the European Research Council, to name a few.

EuroLLM has been trained on multiple languages and on several data sources, such as Web data and high-quality datasets, and comes in several versions:

EuroLLM-9B
EuroLLM-1.7B

Soon to be released are a version with a whopping 22B parameters, EuroVLM-9B, which adds a vision encoder, and EuroMoE-2.6B, a sparse mixture-of-experts model for edge devices. So there's something for every use case.

Getting started is easy, as all the model versions are open-sourced on HuggingFace. Open-sourced here means all major components, including the base and instruction-tuned models, the EuroFilter classifier and the synthetic post-training dataset. For instance, to use the 9B-parameter model, get it from HuggingFace and run:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-9B"

# load the tokenizer and model from the HuggingFace hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# cross-lingual prompt: the model continues with the Portuguese translation
text = "English: My name is EuroLLM. Portuguese:"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
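If you want chat-style interaction rather than raw text completion, the instruction-tuned variant can be driven through the standard transformers chat-template API. The following is a minimal sketch, assuming the instruction-tuned checkpoint is published on HuggingFace under the utter-project/EuroLLM-9B-Instruct id:

from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: the instruction-tuned checkpoint lives at this HuggingFace id
model_id = "utter-project/EuroLLM-9B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# build a chat-style prompt using the model's own chat template
messages = [
    {"role": "user", "content": "Translate 'Good morning' into Greek and Portuguese."},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same pattern should carry over to the smaller checkpoints if you need something lighter to run locally.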
Related Articles

Switzerland Releases Its Own Large Language Model


