Mistral's First Strong Inference Model: Open Source, 10x Faster
Mistral AI launches Magistral: 10x faster reasoning, open-source & proprietary LLMs. Top benchmarks in AIME2024, GPQA & LiveCodeBench.
Strong reasoning is finally starting to compete on speed. A heavyweight player has joined the race for powerful reasoning in large models.
On Tuesday, European AI company Mistral AI released Magistral, a new series of large language models (LLMs) showcasing strong reasoning capabilities. The models can reflect over extended chains of thought and tackle more complex, multi-step tasks.
The release includes two versions: Magistral Medium, a proprietary model for enterprise customers, and Magistral Small, an open-source version with 24B parameters. The open-source version is licensed under Apache 2.0, allowing free use and commercialization, while Magistral Medium is accessible via Mistral’s Le Chat interface and La Plateforme API.
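As a minimal sketch of how one might query Magistral Medium through La Plateforme's chat completions endpoint: the request shape below follows Mistral's standard chat API, but the model identifier `magistral-medium-latest` is an assumption — check Mistral's API documentation for the current name.

```python
import json

# Mistral's chat completions endpoint on La Plateforme.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt, model="magistral-medium-latest"):
    """Assemble the JSON body for a chat completion request.

    The model identifier is an assumption for illustration;
    consult Mistral's docs for the exact Magistral model name.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that the square root of 2 is irrational.")
# Send this body as POST to API_URL with an Authorization: Bearer <key> header.
print(json.dumps(payload, indent=2))
```

The same request shape works against Magistral Small served locally, since the open weights can be hosted behind any OpenAI-compatible inference server.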
In benchmark tests, the new models performed impressively. The comparison mainly focuses on Magistral versus its predecessor, Mistral-Medium 3, and the DeepSeek series. Magistral Medium scored 73.6% on AIME2024, rising to 90% with majority voting over 64 samples (maj@64). Magistral Small scored 70.7% and 83.3%, respectively.
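The majority-voting figures above use a standard evaluation trick: sample many answers per problem and keep the most common one. A minimal sketch of that aggregation step (the sampled answers here are made up for illustration):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among k sampled completions (maj@k)."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical answers sampled from a model for one AIME-style problem;
# with maj@64 one would collect 64 such samples per problem.
samples = ["204", "204", "117", "204"]
print(majority_vote(samples))  # prints "204"
```

The final maj@k score is then the fraction of problems where this voted answer matches the ground truth.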