AI Disruption
Foundation Model
Qwen3.5 Local Deployment
Deploy Qwen3.5 397B locally with Unsloth Dynamic 2.0 quantization. Run on Mac or PC with llama.cpp, SGLang, MLX, and an OpenAI-compatible API.
Feb 17 · Meng Li
Qwen3.5-Plus Released: Unbeatable Cost Performance
Meet Qwen3.5 Plus, the most powerful and affordable AI model yet. With breakthrough efficiency and a native multimodal architecture, it delivers top-tier…
Feb 16 · Meng Li
MiniMax M2.5: Tiny, near Opus level, dirt cheap, fast AF
MiniMax M2.5 official release: a powerful, affordable domestic model with near Opus-level performance. Ultra-fast 100 tokens/sec, $0.3 per 1M input…
Feb 13 · Meng Li
Kimi K2.5: Most Powerful Open-Source Model Drops
Moonshot AI open-sources Kimi K2.5, the most powerful open-source model, featuring native multimodal training, Visual Agentic Intelligence, and an Agent…
Jan 27 · Meng Li
Zhipu’s New Model Also Uses DeepSeek’s MLA, Runs on Apple M5
Zhipu IPO follow-up: the free 30B MoE GLM-4.7-Flash, with 3B active parameters and MLA attention, runs locally at 43 tok/s on Apple M5.
Jan 20 · Meng Li
MiniMax M2.1 Tops AI Coding
MiniMax M2.1 crushes SWE-bench, delivers Rust/Go/React code that runs—ready for Cursor & Claude Code.
Dec 24, 2025 · Meng Li
Zhipu IPO-bound, grabs code-model SOTA overnight
GLM-4.7, an open-source code model, beats GPT-5.1 with 73.8% on SWE-Bench. Try the free API now.
Dec 23, 2025 · Meng Li
Mistral Open-Sources Devstral 2, Bans Big Biz Use
Mistral drops Devstral 2 & Vibe CLI: 72% on SWE-bench, a free API, and a license cap on big-corp use.
Dec 10, 2025 · Meng Li
Mistral AI Launches Mistral 3 Line, Back to Apache 2.0
Mistral 3 open models land: Apache 2.0 licensing, a 675B MoE flagship, and 3-14B edge variants ready to deploy.
Dec 3, 2025 · Meng Li
Kimi K2 Thinking Launched, Outsmarting GPT-5
Kimi K2 Thinking: an open-source 1T-parameter MoE agent model beats GPT-5, with MIT-licensed weights live on kimi.com.
Nov 7, 2025 · Meng Li
Kimi Linear: 6.3× Faster 1M-Token Decoding, 75% Less KV Cache
Kimi Linear: 6.3× faster decoding with 75% leaner attention, rewriting Transformer limits.
Oct 31, 2025 · Meng Li
MiniMax Open-Sources M2, Free Limited-Time
The 10B-active MiniMax-M2 tops the open-source charts in code and agent tasks, with a free API and open weights.
Oct 27, 2025 · Meng Li