AI Disruption

New SOTA 32B Model: Open, Free, 1/20 Size of DeepSeek-R1

Skywork-OR1: Powerful 32B open model, free for commercial use. 1/20th size of DeepSeek-R1, beats Qwen-32B. Weights, code & data fully open!

Meng Li
Apr 14, 2025 ∙ Paid


"AI Disruption" Publication 5800 Subscriptions 20% Discount Offer Link.


The title of most powerful reasoning model under 100 billion parameters has just changed hands.

At 32B parameters, it is 1/20th the size of DeepSeek-R1, free for commercial use, and fully open source: the model weights, training datasets, and complete training code have all been released.

This is the newly unveiled Skywork-OR1 (Open Reasoner 1) model series:

The general-purpose 32B model (Skywork-OR1-32B) decisively surpasses Alibaba's QwQ-32B at the same scale, while its code generation rivals DeepSeek-R1 at higher cost-efficiency.

This post is for paid subscribers.

© 2025 Meng Li