Liang Wenfeng Open-Sources Memory—DeepSeek V4
DeepSeek open-sources Engram, a 27B conditional memory module that boosts LLM knowledge and reasoning without extra compute.
Just a dozen hours ago, DeepSeek released a new paper titled “Conditional Memory via Scalable Lookup: A New Axis of Sparsity for Large Language Models,” completed in collaboration with Peking University, with Liang Wenfeng also listed as an author.
To briefly summarize the problem this new research addresses: current large language models achieve sparsity mainly through Mixture of Experts (MoE), a form of “conditional computation.” Existing Transformers, however, lack a native knowledge-lookup mechanism, so they are forced to simulate retrieval inefficiently through computation.
In response, DeepSeek proposes conditional memory as a complement to MoE’s conditional computation, implemented through a new module called Engram.
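For intuition only, here is a minimal PyTorch sketch of what “conditional memory via lookup” could look like in the abstract: each token uses a key to fetch a row from a large memory table, and the retrieved vector is gated into the hidden state, adding knowledge capacity without adding per-token compute the way a larger FFN would. This is a hypothetical illustration of the general idea, not the Engram design itself; the class name `ConditionalMemory`, the hash-based keying, and all parameters below are invented for this sketch (see the paper and repository for the actual mechanism).

```python
import torch
import torch.nn as nn

class ConditionalMemory(nn.Module):
    """Toy conditional-memory layer (illustrative only, not Engram).

    Each token retrieves one vector from a large embedding table via a
    simple key, and a learned gate mixes it into the hidden state.
    The lookup grows parameter count (knowledge) without growing the
    dense compute applied to every token.
    """

    def __init__(self, num_slots: int, d_model: int):
        super().__init__()
        self.memory = nn.Embedding(num_slots, d_model)  # sparse knowledge store
        self.gate = nn.Linear(d_model, 1)               # how much memory to mix in

    def forward(self, hidden: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # Hypothetical key: map each token id to a memory slot by modulo hashing.
        slot_ids = token_ids % self.memory.num_embeddings
        retrieved = self.memory(slot_ids)                # [batch, seq, d_model]
        g = torch.sigmoid(self.gate(hidden))             # [batch, seq, 1]
        return hidden + g * retrieved                    # residual memory injection

# Usage: inject retrieved memory into hidden states at some layer.
mem = ConditionalMemory(num_slots=1_000_000, d_model=512)
hidden = torch.randn(2, 16, 512)
token_ids = torch.randint(0, 32_000, (2, 16))
out = mem(hidden, token_ids)
print(out.shape)  # torch.Size([2, 16, 512])
```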
The actual implementation of the Engram module has already been uploaded to GitHub.
Project URL: https://github.com/deepseek-ai/Engram
DeepSeek is back!
Furthermore, combined with the “mHC: Manifold-Constrained Hyper-Connections” research announced over the New Year period, the shape of DeepSeek V4 is becoming increasingly clear. Now we’re just waiting for the release!




