Claude Intelligence Downgrade: User Trust Erodes
Anthropic admits to a Claude model intelligence downgrade and ongoing issues. Users report laziness and eroding trust. Learn about the aftermath of the $13B funding round.
"AI Disruption" Publication 7600 Subscriptions 20% Discount Offer Link.
Do you remember how, every time OpenAI releases new features or new models, some comments claim that existing model capabilities have declined? Suspicions that large models are being "dumbed down" never cease.
Setting aside the significant downgrades caused by OpenAI's user-tiering mechanisms for accounts in certain regions, ordinary users still feel that large models occasionally run into problems.
When I tested GPT-5, I felt its capabilities fell short of expectations, and I too wondered whether it had been "dumbed down."
Either way, large model providers had never before directly acknowledged the "dumbing down" problem, and users' perceptions of it remained vague.
OpenAI research scientist Aidan McLaughlin tweeted about this phenomenon a couple of days ago.
His point was that everyone, himself included, often mistakenly believes a given AI model has been "weakened" by its lab, and that this misperception occurs far more often than he expected. He even argues that the illusion is so common it should be defined as a new psychological phenomenon.