Alibaba Releases Next-Generation MoE-based Qwen2.5-Max, Surpassing DeepSeek V3
Explore Qwen2.5-Max, Alibaba's groundbreaking ultra-large MoE model pushing AI's limits with unmatched performance. Try its multimodal capabilities today!
"AI Disruption" publication New Year 30% discount link.
The progress of open-source AI is truly insane!
Just as DeepSeek R1 was causing a market earthquake, Alibaba's Tongyi Qianwen team dropped another bombshell: Qwen2.5-Max.
Regarding the path to AGI, it seems we’ve all been too eager.
For a long time, the common belief was that by scaling up models and data, we’d get closer to AGI. But the reality is that we actually know very little about training ultra-large models.
Today’s Qwen2.5-Max represents an exploration into ultra-large MoE models.
Qwen2.5-Max is built on a Mixture-of-Experts (MoE) architecture and pretrained on more than 20 trillion tokens, then post-trained with curated supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF).
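To make the "MoE" part concrete: the idea is that each token activates only a small subset of expert feed-forward networks, chosen by a learned router, so total parameter count can grow without a proportional increase in compute per token. Below is a minimal, illustrative sketch of top-k expert routing; the expert count, layer sizes, and gating scheme are placeholder values for readability, not Qwen2.5-Max's actual design.

```python
# Illustrative Mixture-of-Experts layer with top-k routing (NOT Qwen2.5-Max's real architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Experts: independent feed-forward networks; only top_k of them run per token.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                       # x: (batch, seq, d_model)
        scores = self.router(x)                 # (batch, seq, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over the selected experts only
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e)       # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = MoELayer()
    tokens = torch.randn(2, 16, 512)            # dummy batch of token embeddings
    print(layer(tokens).shape)                  # torch.Size([2, 16, 512])
```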
The numbers are impressive: an Arena-Hard score of 89.4, ahead of DeepSeek V3 at 85.5 and GPT-4o at 77.9, along with strong results on benchmarks such as MMLU-Pro and GPQA-Diamond.
It surpasses DeepSeek V3 across multiple key metrics, including Arena-Hard and LiveBench, and competes directly with top closed-source models.
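If you want to try the model yourself, it is accessible through Qwen Chat and through Alibaba Cloud's API. Here is a minimal usage sketch using the OpenAI-compatible client; the base URL and model name (`qwen-max-2025-01-25`) reflect the launch materials and may change, so treat them as assumptions and confirm against the official documentation.

```python
# Hypothetical usage sketch: calling Qwen2.5-Max via Alibaba Cloud Model Studio's
# OpenAI-compatible endpoint. base_url and model name are assumptions from the
# launch announcement; verify current values in the official docs.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # your Model Studio API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-max-2025-01-25",  # Qwen2.5-Max model name at launch
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which number is larger, 9.11 or 9.8?"},
    ],
)
print(response.choices[0].message.content)
```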