

It’s concerning that the most advanced AIs are the most likely to lie, study finds

Artificial intelligence (AI) is “getting better at pretending to be knowledgeable.”


People who appear intelligent can be the hardest to trust, because they are capable of telling the most convincing lies.

This reasoning appears to hold true for large language models as well, which grow more capable with each iteration. According to new research, instead of avoiding or declining questions they cannot answer, the latest generation of AI chatbots is actually becoming less reliable.

The study, published in the journal Nature, analyzed some of the top commercial LLMs on the market, including OpenAI’s GPT and Meta’s LLaMA, along with BLOOM, an open-source model from the research collective BigScience.
It found that although the models’ answers were in some situations becoming more accurate, they were overall less dependable, giving a higher percentage of incorrect answers than earlier models did.


“They respond to practically everything these days. And that implies both more accurate and more inaccurate [solutions],” study coauthor José Hernández-Orallo, a researcher at the Valencian Research Institute for Artificial Intelligence in Spain, told Nature.

Mike Hicks, a philosopher of science and technology at the University of Glasgow, had a harsher assessment.
“That looks to me like what we would call bullshitting,” Hicks, who was not involved in the study, told Nature. “It’s getting better at pretending to be knowledgeable.”


The models were quizzed on topics ranging from math to geography, and were also asked to perform tasks such as listing information in a specified order.
The bigger, more powerful models gave the most accurate responses overall, but faltered on harder questions, where their accuracy dropped.

According to the researchers, some of the biggest BS-ers were OpenAI’s GPT-4 and o1, which would answer almost any question thrown at them. But all of the studied LLMs appear to be trending this way, and in the LLaMA family of models, none reached 60 percent accuracy even on the easiest questions, the study said.


In sum, the bigger the AI models got, in terms of parameters, training data, and other factors, the higher the percentage of wrong answers they gave.

Still, AI models are getting better at answering more complex questions. The problem, beyond their propensity for BS-ing, is that they still flub the easy ones. In theory, those errors should be a bigger red flag, but because we are impressed by how the large language models handle sophisticated problems, we may be overlooking their obvious flaws, the researchers suggest.


As such, the work had some sobering implications for how humans perceive AI responses. When asked to judge whether the chatbots’ answers were accurate or inaccurate, a select group of participants got it wrong between 10 and 40 percent of the time.

The researchers conclude that the most straightforward way to address the problem is to program the LLMs to be less eager to answer every question.
“You can put a threshold, and when the question is challenging, [get the chatbot to] say, ‘no, I don’t know,’” Hernández-Orallo told Nature.


Yet being honest might not be the best course of action for AI firms trying to win over the public with their cutting-edge technology. Restricting chatbots to only the topics they can reliably answer would expose the limits of the technology.

 

 

