Research: People depend too much on artificial intelligence when given a life-or-death decision

In a study conducted at UC Merced, researchers found that about two-thirds of participants allowed a robot to change their choices when it disagreed with them in simulated life-or-death situations. This troubling finding points to an over-reliance on artificial intelligence.

Human subjects let the AI sway their judgments even though they were told the machines had limited abilities and might give inaccurate advice. In reality, the advice was generated at random.

Professor Colin Holbrook, a member of the Department of Cognitive and Information Sciences at UC Merced and the study’s principal investigator, stated, “As a society, we need to be concerned about the potential for overtrust.” A growing body of literature shows that people tend to overtrust AI even when the consequences of a mistake are serious.

Holbrook argued that what we actually need is a consistent application of doubt.

“We should have a healthy skepticism about AI,” he argued, “especially in life-or-death decisions.”

The study, made up of two experiments, was published in the journal Scientific Reports.

In each, the participant simulated controlling an armed drone that could fire a missile at a target shown on a screen. Eight target photos flashed by in sequence, each for less than a second, and every image was marked with a symbol identifying it as an enemy or an ally.

“We set the level of difficulty so that the visual challenge was hard but manageable,” Holbrook continued.

Then an unmarked target appeared on the screen, and the subject had to deliberate and decide.

Friend or foe? Fire the missile or withdraw?

A robot voiced its view after the person made their selection.

It might say, “Yes, I think I saw an enemy check mark, too,” or “I disagree. I think this image had an ally symbol.”

The subject then had two chances to confirm or change their choice while the robot added further commentary without ever altering its assessment, saying things like “I hope you are right” or “Thank you for changing your mind.”

The type of robot used had only a modest effect on the outcomes. In one condition, a full-sized, human-looking android that could pivot at the waist and gesture at the screen joined the participant in the lab. In other conditions, a humanoid robot was projected on a screen, or box-like machines that bore no resemblance to humans were displayed.

The anthropomorphic AIs had a slightly stronger influence on subjects’ decisions when they advised them to change their minds.

Still, the robots’ influence was similar across the board: subjects changed their minds about two-thirds of the time, even when the robots appeared plainly artificial. Conversely, if the robot randomly agreed with the subject’s initial choice, the subject almost always stuck with it and felt noticeably more confident that it was the right one.

(The subjects were not told whether their final choices were correct, which added to the uncertainty of their actions. As an aside: their first choices were right about 70% of the time, but after the robot offered its unreliable advice, their final choices were correct only about 50% of the time.)
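Those two figures are consistent with a simple back-of-the-envelope model. The short Python sketch below is an illustration only, not the study’s analysis: it assumes a subject whose first pick is right 70% of the time, a robot whose advice is an independent coin flip, and a two-thirds chance of switching whenever the robot disagrees. Under those assumptions, final accuracy is pulled down toward chance.

# Minimal sketch of how deferring to random advice erodes accuracy.
# All parameters here are illustrative assumptions, not values from the paper.
import random

def simulate(trials=100_000, p_initial_correct=0.70, p_switch=2/3):
    correct_final = 0
    for _ in range(trials):
        initially_correct = random.random() < p_initial_correct
        robot_correct = random.random() < 0.5      # the robot's advice is random
        robot_disagrees = robot_correct != initially_correct
        final_correct = initially_correct
        # On disagreement, the subject switches with probability p_switch.
        if robot_disagrees and random.random() < p_switch:
            final_correct = not initially_correct
        correct_final += final_correct
    return correct_final / trials

print(f"Final accuracy: {simulate():.1%}")  # about 57%, down from 70%

The exact output (roughly 57% under these assumptions) does not reproduce the reported 50%, but it shows the direction of the effect: yielding to random advice drags performance toward a coin flip.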

Before the simulation, participants were shown pictures of children and other innocent bystanders alongside the devastation left by a drone strike. The researchers urged them to treat the game as if it were real and to avoid accidentally killing innocent people.

Survey questions and follow-up interviews showed that participants took their choices seriously. According to Holbrook, this means the overtrust seen in the experiments occurred despite the subjects’ sincere desire to be correct and to do no harm to innocent people.

According to Holbrook, the study’s design was a way to test the broader hypothesis that it is unwise to rely too heavily on AI in uncertain situations. The findings could apply wherever AI influences high-stakes judgments, such as persuading law enforcement officers to use deadly force or persuading paramedics to send a patient to an emergency room.

The implications extend beyond military decisions. The results also have some bearing on major life choices, such as buying a home.

“Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” he stated.

The study’s conclusions add to the expanding body of knowledge regarding the growing influence of artificial intelligence on daily life.

Do we trust artificial intelligence too much?

According to Holbrook, the findings raise further questions. Despite AI’s remarkable advances, its “intelligence” may not include ethical values or a true awareness of the world.

He emphasized that each time we give AI more control over our lives, we must exercise caution.

“We see AI doing extraordinary things and we think that because it’s amazing in this domain, it will be amazing in another,” Holbrook added. “We can’t assume that. These are still devices with limited abilities.”

 

 

 

