New SOTA 32B Model: Open, Free, 1/20 Size of DeepSeek-R1
Skywork-OR1: Powerful 32B open model, free for commercial use. 1/20th size of DeepSeek-R1, beats Qwen-32B. Weights, code & data fully open!
The title of most powerful reasoning model under 100 billion parameters has just changed hands.
At 32B parameters, it is 1/20th the size of DeepSeek-R1, free for commercial use, and fully open source: the model weights, training datasets, and complete training code have all been released.
This is the newly unveiled Skywork-OR1 (Open Reasoner 1) model series.

The general-purpose 32B model (Skywork-OR1-32B) surpasses Alibaba's QwQ-32B at the same scale, and its code generation rivals DeepSeek-R1 at a far lower cost.