Google Open-Sources Gemma 4, Beats 13x Larger Qwen 3.5
Google open-sources Gemma 4, outperforming models 13x larger, under an Apache 2.0 license.
This Thursday, Google open-sourced the Gemma 4 series, currently the strongest model family in the open-source world.
Built on the same research breakthroughs as Gemini 3, the new models achieved 3rd place globally on the Arena AI leaderboard, surpassing models with 20 times more parameters. More importantly, this generation of Gemma uses the Apache 2.0 open-source license, enabling complete commercial freedom.
Gemma 4 is Google DeepMind’s latest series of open models. They are multimodal models capable of processing text and image inputs (with small models also supporting audio input) and generating text output. This release includes both pre-trained and instruction-tuned open-weight models. Gemma 4 features a context window of up to 256,000 tokens and supports over 140 languages.
Google stated that Gemma 4 employs both dense and Mixture-of-Experts (MoE) architectures, making it highly suitable for tasks such as text generation, coding, and reasoning. The series comes in five sizes: E2B, E4B, 26B, A4B, and 31B. This range lets the models be deployed across a wide variety of environments, from high-end smartphones and laptops to servers, putting cutting-edge AI within reach of more people.
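The MoE idea mentioned above is that each token activates only a small subset of "expert" sub-networks, so the model runs far fewer parameters per token than its total count suggests. Here is a minimal, self-contained sketch of top-k expert routing; the sizes, router weights, and top-k value are illustrative assumptions, not Gemma 4's actual configuration:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, experts, router_weights, top_k=2):
    """Route a token to its top-k experts and mix their outputs.

    token          : a single scalar feature (toy stand-in for a vector)
    experts        : list of callables, one per expert network
    router_weights : one routing weight per expert (toy router)
    """
    # The router scores every expert for this token...
    logits = [w * token for w in router_weights]
    probs = softmax(logits)
    # ...but only the top-k experts actually run (sparse activation).
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    norm = sum(probs[i] for i in chosen)
    # Weighted mix of just the selected experts' outputs.
    return sum((probs[i] / norm) * experts[i](token) for i in chosen)

# Toy usage: four "experts", each a simple function of the input.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
out = moe_layer(3.0, experts, router_weights=[0.1, 0.5, 0.2, 0.3], top_k=2)
```

With `top_k=2`, only two of the four experts contribute to `out`; a real MoE transformer applies the same routing per token per layer, with learned router weights and neural-network experts.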
Notably, the largest 31B version can perform full-precision inference on a single 80GB H100 GPU, demonstrating performance comparable to Qwen 3.5 397B.



