Mistral's First Strong Inference Model: Open Source, 10x Faster

Mistral AI launches Magistral: 10x faster reasoning, open-source & proprietary LLMs. Top benchmarks in AIME2024, GPQA & LiveCodeBench.

Meng Li
Jun 11, 2025


"AI Disruption" Publication 6800 Subscriptions 20% Discount Offer Link.


Mistral Introduces 'Magistral' AI Model Series for Advanced Reasoning Tasks

Strong reasoning is finally starting to compete on speed: another heavyweight has joined the race for reasoning-capable large models.

On Tuesday, European AI company Mistral AI released Magistral, a new series of large language models (LLMs) built for strong reasoning. The models can carry out extended chains of reflection and tackle more complex, multi-step tasks.

The release includes two versions: Magistral Medium, a proprietary model for enterprise customers, and Magistral Small, an open-source version with 24B parameters. The open-source version is licensed under Apache 2.0, allowing free use and commercialization, while Magistral Medium is accessible via Mistral’s Le Chat interface and La Plateforme API.
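For readers who want to try the hosted Magistral Medium, the sketch below shows one plausible way to query it through Mistral's Python SDK. The model identifier `magistral-medium-latest` and the exact prompt are assumptions for illustration, not details confirmed in this post.

```python
# Minimal sketch: querying Magistral Medium via La Plateforme.
# Assumptions (not from this post): the `mistralai` Python SDK (v1 interface)
# and the model id "magistral-medium-latest".
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="magistral-medium-latest",  # assumed model id
    messages=[
        {"role": "user", "content": "Reason step by step: what is the 10th prime number?"}
    ],
)

# Print the model's reasoning-backed answer.
print(response.choices[0].message.content)
```

The open-source Magistral Small, being Apache 2.0 licensed, can instead be downloaded and served locally with standard open-weight tooling.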

In benchmark tests, the new models performed impressively; the comparison mainly pits Magistral against its predecessor, Mistral Medium 3, and the DeepSeek series. Magistral Medium scored 73.6% on AIME2024, rising to 90% with majority voting over 64 samples (maj@64); Magistral Small scored 70.7% and 83.3%, respectively.
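As a rough illustration of what "majority voting over 64 samples" means, the sketch below scores each question by sampling many answers and keeping the most frequent one. The `sample_answer` callable is a hypothetical stand-in for one model generation plus answer extraction; it is not Mistral's actual evaluation harness.

```python
# Illustrative sketch of maj@64 scoring (self-consistency style voting).
from collections import Counter

def majority_vote_accuracy(questions, sample_answer, k=64):
    """Fraction of questions where the most frequent of k sampled answers is correct."""
    correct = 0
    for q in questions:
        answers = [sample_answer(q["prompt"]) for _ in range(k)]
        voted = Counter(answers).most_common(1)[0][0]  # most frequent answer wins
        if voted == q["gold"]:
            correct += 1
    return correct / len(questions)
```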
