T5Gemma Updated Again: Google Sticks with Encoder-Decoder
Google drops T5Gemma 2: 128K context, multimodal, 140+ languages—encoder-decoder fights back.
As the year draws to a close, Google’s releases have been coming thick and fast. Just yesterday it released Gemini 3 Flash, billed as the model with the best price-performance ratio in the world when weighing intelligence against cost.
Then, just when everyone assumed Google’s model releases for the year were over, it pulled out one more surprise: T5Gemma 2.
The T5Gemma series has never left much of a lasting impression on the public. Google first released it this July, launching 32 models in one go.
As the name suggests, the T5Gemma series is closely related to T5. T5 (Text-to-Text Transfer Transformer) is an encoder-decoder large-model framework that Google proposed in 2019, and the idea of “encoder-decoder large models” can be traced back almost entirely to it.
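
To make “text-to-text” concrete: T5 casts every task as a plain string-in, string-out problem for one encoder-decoder model. The toy pairs below use task prefixes popularized by the T5 paper; the snippet is purely illustrative and runs no model.

```python
# Illustrative only: T5 frames every task as string -> string, so one
# encoder-decoder model can handle translation, summarization, classification, etc.
examples = [
    ("translate English to German: The house is wonderful.", "Das Haus ist wunderbar."),
    ("summarize: <long news article> ...", "<short summary>"),
    ("cola sentence: The course is jumping well.", "unacceptable"),
]
for encoder_input, decoder_target in examples:
    print(f"encoder input : {encoder_input}")
    print(f"decoder target: {decoder_target}")
```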
T5Gemma uses an “adaptation” technique to convert a pre-trained decoder-only model into an encoder-decoder architecture.
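
Google does not share the adaptation code here, but the core idea described for T5Gemma is to initialize both the encoder and the decoder from the pre-trained decoder-only weights and then continue training. The sketch below is a minimal, hypothetical PyTorch illustration of that weight-copying step; the module choices (torch.nn.Transformer), names, and sizes are my assumptions, not Google’s implementation. Cross-attention has no decoder-only counterpart, so it starts from scratch.

```python
# Minimal, hypothetical sketch of "adaptation": seed an encoder-decoder model
# from a decoder-only checkpoint. Module choices and sizes are illustrative
# assumptions, not Google's actual T5Gemma code.
import torch
import torch.nn as nn

D_MODEL, N_HEADS, N_LAYERS = 256, 4, 2

def make_pretrained_decoder_only() -> nn.TransformerEncoder:
    # Stand-in for a pre-trained decoder-only LM stack (a causal mask would be
    # applied at call time); here it is just randomly initialised.
    layer = nn.TransformerEncoderLayer(D_MODEL, N_HEADS, batch_first=True)
    return nn.TransformerEncoder(layer, N_LAYERS)

def adapt_to_encoder_decoder(pretrained: nn.TransformerEncoder) -> nn.Transformer:
    """Initialise both halves of an encoder-decoder model from the same
    decoder-only weights; only cross-attention starts from scratch."""
    enc_dec = nn.Transformer(
        d_model=D_MODEL, nhead=N_HEADS,
        num_encoder_layers=N_LAYERS, num_decoder_layers=N_LAYERS,
        batch_first=True,
    )
    # Encoder blocks share the decoder-only block structure, so the weights
    # can be copied wholesale (the encoder's extra final norm stays fresh).
    enc_dec.encoder.load_state_dict(pretrained.state_dict(), strict=False)
    # Decoder blocks reuse the pretrained self-attention and feed-forward
    # weights; the cross-attention module (multihead_attn) keeps its fresh init.
    for pre_layer, dec_layer in zip(pretrained.layers, enc_dec.decoder.layers):
        dec_layer.self_attn.load_state_dict(pre_layer.self_attn.state_dict())
        dec_layer.linear1.load_state_dict(pre_layer.linear1.state_dict())
        dec_layer.linear2.load_state_dict(pre_layer.linear2.state_dict())
    return enc_dec

if __name__ == "__main__":
    model = adapt_to_encoder_decoder(make_pretrained_decoder_only())
    src = torch.randn(1, 8, D_MODEL)   # encoder input (already embedded)
    tgt = torch.randn(1, 4, D_MODEL)   # decoder input (already embedded)
    print(model(src, tgt).shape)       # torch.Size([1, 4, 256])
```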
Unfortunately, the encoder-decoder architecture never became mainstream in the large-model era, and against the backdrop of rapidly iterating decoder-only LLMs, it has been slowly pushed to the margins.
Google is one of the few players still persisting with encoder-decoder large models.



