AI Disruption

Switched to Kimi K2.5 and GLM-5, can't go back to Claude Code

Switch Claude Code to Kimi K2.5 and GLM 5 for massive cost savings with equal dev performance. One command model switching and plan mode tips included.

Meng Li
Feb 12, 2026
∙ Paid

I’ve been using Cursor and Claude Code for quite a long time now. My current daily workflow is roughly: Cursor takes up 60%, and Claude Code takes up 40%. Each tool has its strengths and weaknesses, but if I were only using Claude’s official models, the ratio would probably flip the other way around.

Previously, I had configured Claude Code with Claude Opus 4.5, and the token consumption was extremely high.

Now Claude Code has access to Kimi K2.5 and GLM-5.
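The switch works because Claude Code reads its backend from environment variables such as `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN`, and both providers expose Anthropic-compatible endpoints. Here is a minimal sketch of one-command model switching; the endpoint URLs are assumptions based on each provider's published Anthropic-compatible gateway, so check Moonshot's and Zhipu's docs for the current URLs and supply your own API key:

```shell
# Hypothetical helper for one-command model switching in Claude Code.
# The base URLs below are assumptions — verify them against your
# provider's documentation, and export ANTHROPIC_AUTH_TOKEN yourself.
use_model() {
  case "$1" in
    kimi)   export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic" ;;
    glm)    export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic" ;;
    claude) unset ANTHROPIC_BASE_URL ;;
    *)      echo "usage: use_model {kimi|glm|claude}"; return 1 ;;
  esac
  # Claude Code picks up ANTHROPIC_BASE_URL from the environment, so the
  # next `claude` session in this shell talks to the chosen backend.
  echo "Claude Code backend: ${ANTHROPIC_BASE_URL:-official Anthropic API}"
}

use_model kimi
```

Drop the function into your `.bashrc` or `.zshrc`, and switching back to the official models is just `use_model claude`.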

The cost-performance ratio just skyrockets! In most everyday development scenarios, their efficiency is completely on par with Claude and OpenAI models.

Looking at OpenRouter’s weekly leaderboard and the OpenClaw request volume rankings, Kimi K2.5 has been dominating the #1 spot for a long time, far ahead of Gemini 3.
