AI Disruption
Language ≠ Thought, Large Models Can't Learn Reasoning: A Nature Article Sparks an Uproar in the AI Community


MIT Study Reveals Shocking Truth: Language Skills Don’t Equal Intelligence in AI

Meng Li
Jul 07, 2024
Have we gotten it all wrong?

Why do large language models (LLMs) lack spatial intelligence? Would training GPT-4 on non-language data make it smarter? These questions may now have "standard answers."

Recently, an article published in the top journal Nature by researchers at MIT and other institutions observed that the neural networks in the human brain responsible for generating and parsing language do not handle formal reasoning, and suggested that reasoning does not require language as a medium.

The paper claims that "language is primarily a tool for communication rather than thought" and is not necessary for any form of reasoning tested, sparking significant discussion in the tech community.

Could it be that, as linguist Noam Chomsky has argued, the hype around ChatGPT is a waste of resources, and that the path to artificial general intelligence (AGI) through large language models is fundamentally wrong?

© 2025 Meng Li