
AI Charges Journalist of Abusing Children and Widows and Fleeing Psych Ward




“This seriously violates my human dignity.”

According to The Register, Microsoft’s AI chatbot Copilot defamed a reporter, accusing him of some of the very crimes he had reported on. It is just another instance of AI hallucinations producing inaccurate and harmful information.

The journalist, Martin Bernklau, a German court reporter from Tübingen, said the AI had labeled him a charlatan who preys on bereaved widows, an escapee from a mental institution, and a convicted child abuser.

The incident was first reported by Südwestrundfunk (SWR), a German public broadcaster, which said the chatbot supplied Bernklau’s full name, address, phone number, and even route directions to his residence.

“This seriously violates my human dignity,” Bernklau said in a statement to SWR, translated from German using Google Translate.

No Recourse

Bernklau made the discovery when he asked a version of Copilot integrated into Microsoft’s search engine Bing about himself, in an attempt to check how his articles were doing online.

The chatbot appears to have conflated Bernklau’s decades of coverage of criminal trials with the crimes themselves, mistakenly identifying him as the offender.

According to SWR, public prosecutors dismissed Bernklau’s criminal complaint for slander on the grounds that no crime had been committed, since no actual person could have been the source of the statements.

“Microsoft promised the data protection officer of the Free State of Bavaria that the fake content would be deleted,” Bernklau told The Register. “However, that lasted for just three days. Right now, it looks like Copilot has completely blocked my name. However, throughout the last three months, things have been changing every day or even every hour.”

Frequent Liar

This is by no means the first time an AI hallucination has maligned someone. Last year, Meta’s AI chatbot accused a Stanford AI researcher of being a terrorist.

More recently, Elon Musk’s Grok falsely asserted, perhaps as a result of misinterpreting joking tweets, that an NBA player was responsible for a spree of graffiti vandalism.

These episodes are part of a larger trend of subpar AIs spreading false information, or even outright disinformation, but they are especially harmful because they single out specific people.

The incident has traumatized Bernklau, he told The Register, describing his reaction as a “mixture of shock, horror, and disbelieving laughter.” “It was too crazy, too unbelievable, but also too threatening,” he said.

It’s unclear at this time whether Microsoft can be held legally responsible for what its chatbot says, though ongoing legal disputes may establish a precedent. One man, for example, sued OpenAI after ChatGPT erroneously accused him of embezzling money. For the time being, however, Bernklau’s options are limited.
