
Will people accept lying robots?


According to scientists, it depends on the falsehood.

 

Most of the time, honesty is the best policy. Social norms help people understand when it is appropriate to tell the truth and when it is not, in order to spare others' feelings or prevent harm. But as robots increasingly collaborate with humans, how do these norms apply to them? To find out whether people can tolerate lying from robots, researchers asked nearly 500 participants to rate and explain various forms of robot deception.

Andres Rosero, a Ph.D. candidate at George Mason University and the study’s lead author, said, “I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers.” The study was published in Frontiers in Robotics and AI.

Three distinct categories of falsehoods

Robots are already employed in three sectors, healthcare, cleaning, and retail, so the scientists selected three scenarios, one per sector, each reflecting a different type of deception.

These were superficial state deceptions, in which a robot's design exaggerates its capabilities; hidden state deceptions, in which a robot's design conceals its capabilities; and external state deceptions, in which a robot lies about the world beyond itself.

In the external state deception scenario, a robot caring for an Alzheimer's patient tells her that her late spouse will soon be coming home. In the hidden state deception scenario, a woman watches a robot clean her house, not realizing that the robot is also filming. Finally, in the superficial state deception scenario, a robot pretending to be in pain while moving furniture in a shop, as part of a study on human–robot interaction, prompts a human to ask someone else to take the robot's place.

How intricately we weave our webs

The scientists recruited 498 volunteers and asked each to read one of the scenarios and then complete a questionnaire. It asked whether they approved of the robot's behavior, how deceptive they found it, whether it could be justified, and whether anyone else was responsible for it.

The researchers carefully examined and analyzed these answers in order to find any recurring themes.

Participants judged the housecleaning robot with the concealed camera to be the most deceptive, and most of them disapproved of the hidden state deception. They rated the external state deception as only moderately deceptive, but disapproved more strongly of the superficial state deception, in which a robot feigned pain, a behavior they may also have seen as manipulative.

By contrast, the majority of participants approved of the external state deception, in which the robot lied to the patient. They defended the robot's behavior on the grounds that it spared the patient needless pain, putting the protection of her feelings ahead of the truth.

The machine’s ghost

Although participants could offer justifications for all three deceptions, most said the hidden state deception could not be justified.

For example, some suggested that the housecleaning robot might have been filming for security reasons. Similarly, almost half of those who responded to the superficial state deception felt it was unjustifiable. Participants tended to hold robot owners or developers responsible for these unacceptable deceptions, particularly hidden state deceptions.

“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” said Rosero.

Rosero added that some organizations have already used AI chatbots and web design techniques to push customers toward certain behaviors, and argued that regulation is needed to protect people from such harmful deceptions.

The researchers cautioned, however, that further study using roleplay or video recordings is needed to more accurately capture people's real-life reactions to robot deception.

“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” noted Rosero. “Vignette studies offer preliminary results that can be verified or refuted by more research. Studies using real-world or simulated human–robot interactions should shed more light on how people interpret these deceptive robot behaviors.”
