OpenAI Open-Sources New Circuit-Sparsity Model
Yesterday, OpenAI open-sourced a new model called Circuit-Sparsity, with only 0.4B parameters and 99.9% of its weights set to zero.
This work targets the model interpretability problem, which essentially comes down to two questions: “Why did the model make this decision?” and “How did it arrive at this result?”
In today’s rapidly advancing AI landscape, large language models (LLMs) have demonstrated remarkable capabilities, yet their internal mechanisms remain a mysterious “black box.” We don’t know why they produce certain answers, nor do we understand how they extract knowledge from massive datasets. This lack of interpretability has become a major barrier to AI deployment in high-stakes fields like healthcare, finance, and law.
To address this, OpenAI’s research team trained a weight-sparse Transformer, forcing 99.9% of the values in its weight matrices to zero and retaining only the remaining 0.1% as non-zero weights.
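To make the constraint concrete, here is a minimal sketch of enforcing a 0.1% weight density with a magnitude-based mask. This is a toy illustration, not the released code: the actual model enforces sparsity throughout training, whereas `sparsify` below simply zeroes a dense matrix after the fact, and all names here are invented for the example.

```python
import numpy as np

def sparsify(weights: np.ndarray, density: float = 0.001) -> np.ndarray:
    """Keep only the largest-magnitude `density` fraction of weights; zero the rest."""
    k = max(1, int(round(weights.size * density)))
    # Threshold at the k-th largest absolute value across the whole matrix.
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))        # 10,000 weights
W_sparse = sparsify(W, density=0.001)  # keep only ~10 non-zero weights
print(np.count_nonzero(W_sparse))
```

With the values drawn from a continuous distribution, exactly `k` entries survive, so the resulting matrix has the 99.9% zero structure the paper describes.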
In this research, the team created compact and interpretable “Circuits” within the model. Each circuit preserves only the critical nodes necessary for model performance, making neuron activations semantically meaningful.
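One common way to identify such critical nodes is ablation: remove a node and check whether the output changes. The sketch below applies this idea to a deliberately tiny linear "model"; the matrix, tolerance, and procedure are all illustrative assumptions, far simpler than the pruning the paper performs on a full transformer.

```python
import numpy as np

# Toy "model": two outputs computed from three input nodes.
# Node 2's weight is tiny, so it should fall outside the circuit.
W = np.array([[2.0, 0.0, 0.001],
              [0.0, 3.0, 0.0]])
x = np.array([1.0, 1.0, 1.0])

def output(mask: np.ndarray) -> np.ndarray:
    """Run the model with some input nodes ablated (masked to zero)."""
    return W @ (x * mask)

baseline = output(np.ones(3))
circuit = []
for i in range(3):
    mask = np.ones(3)
    mask[i] = 0.0  # ablate node i
    if np.abs(output(mask) - baseline).max() > 1e-2:
        circuit.append(i)  # removal changed the output: node is critical

print(circuit)  # nodes 0 and 1 are critical; node 2 is pruned away
```

The surviving `circuit` list plays the role of the compact subnetwork: only the nodes whose ablation measurably hurts the output are kept.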
The work also challenges today’s MoE (Mixture of Experts) models: “We’ve been isolating weights into ‘experts’ all along to roughly approximate sparsity, merely to satisfy the requirements of dense matrix cores.”
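The contrast between the two kinds of sparsity can be made concrete. In an MoE layer, each token's effective weight matrix is block-sparse: the routed expert's block is dense and everything else is zero, a structure chosen to keep matrix multiplies hardware-friendly. The sketch below is purely illustrative; the sizes, names, and routing are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts = 4, 3
experts = rng.normal(size=(n_experts, d, d))  # dense per-expert weight blocks

def moe_effective_weight(routed_expert: int) -> np.ndarray:
    """Effective weight for one token: only the routed expert's block is non-zero."""
    W = np.zeros((n_experts * d, n_experts * d))
    i = routed_expert * d
    W[i:i + d, i:i + d] = experts[routed_expert]
    return W

W_moe = moe_effective_weight(1)
# Non-zeros are confined to a single d x d block: structured sparsity,
# unlike the unstructured, anywhere-in-the-matrix zeros of the weight-sparse model.
print(np.count_nonzero(W_moe))
```

Here all 16 non-zeros sit inside one contiguous block, which is exactly the "experts as a rough approximation of sparsity" pattern the quote refers to.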



