AI Disruption

Liang Wenfeng Open-Sources Memory—DeepSeek V4

DeepSeek open-sources Engram, a 27B conditional memory module that boosts LLM knowledge and reasoning without extra compute.

Meng Li
Jan 13, 2026



Just a dozen hours ago, DeepSeek released a new paper titled “Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models,” completed in collaboration with Peking University, with Liang Wenfeng also listed as an author.


To briefly summarize the problem this new research addresses: current large language models achieve sparsity mainly through Mixture of Experts (MoE), a form of "conditional computation." However, existing Transformers lack a native knowledge-lookup mechanism, so they are forced to simulate retrieval inefficiently through computation.

In response to this situation, DeepSeek has proposed conditional memory to complement MoE’s conditional computation, implementing it through a new module called Engram.
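To make the distinction concrete, below is a minimal PyTorch sketch of the conditional-memory idea: a large embedding table indexed by a hash of the local token n-gram, whose retrieved vectors are gated into the hidden states. The class name ToyEngramLookup, the hashing scheme, and all hyperparameters here are illustrative assumptions, not DeepSeek's actual Engram implementation (see the GitHub repository below for that).

```python
# Toy sketch of "conditional memory": knowledge is looked up from a large,
# sparsely accessed table keyed on the local token context, rather than
# recomputed with dense FLOPs. Names and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEngramLookup(nn.Module):
    def __init__(self, d_model: int, table_size: int = 2**16, ngram: int = 2):
        super().__init__()
        self.ngram = ngram
        self.table_size = table_size
        # Large memory table: only the rows actually indexed are read,
        # so parameters can grow without growing per-token compute.
        self.memory = nn.Embedding(table_size, d_model)
        # Per-position gate decides how much retrieved memory to mix in.
        self.gate = nn.Linear(d_model, 1)
        # Fixed random multipliers for a simple n-gram hash (illustrative).
        self.register_buffer("hash_mult", torch.randint(1, 2**31 - 1, (ngram,)))

    def _hash_ngrams(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq). Build left-padded n-grams and hash each
        # one into a slot of the memory table.
        padded = F.pad(token_ids, (self.ngram - 1, 0), value=0)
        ngrams = padded.unfold(dimension=1, size=self.ngram, step=1)  # (b, s, ngram)
        hashed = (ngrams * self.hash_mult).sum(dim=-1)
        return hashed % self.table_size

    def forward(self, token_ids: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # Conditional memory read: which rows are touched depends on the
        # tokens, not on dense matrix multiplies.
        slots = self._hash_ngrams(token_ids)      # (b, s)
        retrieved = self.memory(slots)            # (b, s, d_model)
        g = torch.sigmoid(self.gate(hidden))      # (b, s, 1)
        return hidden + g * retrieved             # residual mix of looked-up knowledge


if __name__ == "__main__":
    torch.manual_seed(0)
    module = ToyEngramLookup(d_model=64)
    tokens = torch.randint(0, 32_000, (2, 16))
    h = torch.randn(2, 16, 64)
    print(module(tokens, h).shape)  # torch.Size([2, 16, 64])
```

Because only the indexed rows are ever read, the table can be scaled up along this new axis of sparsity without increasing per-token FLOPs, whereas MoE scales conditional computation instead; the two mechanisms are complementary.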

The implementation of the Engram module has already been uploaded to GitHub.

Project URL: https://github.com/deepseek-ai/Engram

DeepSeek is back!

Furthermore, taken together with the research "mHC: Manifold-Constrained Hyper-Connections" announced around the New Year, the shape of DeepSeek V4 is becoming increasingly clear. Now we are just waiting for the release!
