Is AI getting dumber now that it has already peaked?
“After all the hype for me, it was kind of a big disappointment.”
If you believe AI systems like OpenAI’s ChatGPT are less intelligent than they used to be, you’re not alone.
In a scathing opinion piece for Computerworld, writer Steven Vaughan-Nichols takes aim at the big publicly accessible AI models, citing ChatGPT and Claude as flagship examples that don’t perform as well as their earlier iterations.
“Indeed, all too often, the end result is annoyingly, obnoxiously wrong,” he writes.
“To make matters worse, it’s wildly incorrect. I could live with that if I could count on its responses being merely passable but accurate. I can’t.”
He also flagged a Business Insider story reporting that users on the OpenAI developer forum have observed a notable drop in accuracy since the most recent version of GPT was introduced last year.
In June of this year, a user posted, “After all the hype for me, it was kind of a big disappointment.”
Needless to say, this isn’t how things are supposed to work. Newer software versions are generally expected to perform better than the ones they replace.
So what is causing the decline in quality?
One possibility is that these AIs were never quite as powerful as they appeared; we were simply amazed they worked at all (remember, their training material was scraped from sites like Reddit and Twitter).
Vaughan-Nichols, however, points to another potential reason: on top of the garbage from social media, AIs are increasingly ingesting material generated by other AIs, which is gradually eroding their capabilities.
Vaughan-Nichols is talking about “model collapse,” a phenomenon in which AI models degrade when trained on AI-generated data.
The risk grows as the internet fills with ever more AI-generated content, from text to photos.
“We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear,” according to a report published in Nature last month.
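To see why those tails disappear, consider a toy illustration of model collapse (a minimal Python sketch, not the method from the Nature paper): fit a Gaussian to some data, draw the next generation’s “training data” only from that fit, and repeat. Because each fit is made from a finite sample, estimation error compounds, and on average the fitted spread contracts, so rare, extreme values vanish first.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(61):
    # "Train" a model on the current data: fit a Gaussian (mean + std).
    mu, sigma = data.mean(), data.std()
    if gen % 15 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}  "
              f"widest sample={np.abs(data).max():.2f}")
    # The next generation trains only on samples from the fitted model,
    # so sampling error compounds and the tails thin out on average.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

Real language models are vastly more complicated, but the underlying dynamic is the same: each generation can only reproduce what it sampled from the previous one, and the rarest patterns are the first to go.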
The problem will only get more serious as the supply of high-quality human-made data runs dry, which some experts predict could happen as soon as 2026.
It’s possible we’ll eventually come to appreciate the value of unique, human-made work, though we’re not holding our breath.