Google Launches Gemma 3: Top Single-GPU Multimodal Model for Mobile, 27B Beats o3-mini
Google Gemma 3: open-source, multimodal AI with a 128K context window; the 27B model outperforms o3-mini. It runs on a single GPU/TPU, supports 140+ languages, and excels at math, coding, and vision tasks.
"AI Disruption" Publication 5000 Subscriptions 20% Discount Offer Link.
Google Gemma 3 Family is Here!
Just now, at the Paris Developer Day, the open-source Gemma series officially reached its third generation, with native support for multimodal input and a 128K-token context window.
Gemma 3 is released in four parameter sizes: 1B, 4B, 12B, and 27B. Most importantly, each model can run on a single GPU or TPU.
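As a rough illustration of what "runs on a single GPU" can look like in practice, here is a minimal sketch using the Hugging Face Transformers text-generation pipeline. The model ID `google/gemma-3-1b-it` and the generation settings are assumptions for this example, not details from the announcement; pick a larger checkpoint if your hardware allows.

```python
# Minimal sketch: loading an instruction-tuned Gemma 3 checkpoint on one GPU.
# Assumes the Hugging Face `transformers` library and the "google/gemma-3-1b-it"
# model ID (an assumption for illustration); swap in a 4B/12B/27B variant to
# match available VRAM.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",
    torch_dtype=torch.bfloat16,  # half-precision weights to fit comfortably on one GPU
    device_map="auto",           # place the model on the available device automatically
)

messages = [
    {"role": "user", "content": "Summarize what makes Gemma 3 different from Gemma 2."}
]

# The pipeline returns the full chat history; the last message is the model's reply.
output = generator(messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])
```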
On the LMArena leaderboard, Gemma 3 scored an Elo of 1339, beating o1-preview, o3-mini high, and DeepSeek V3 with just 27B parameters, ranking second only to DeepSeek R1 among open models.