Journalist Resigns After Being Exposed for Using Fake, AI-Generated Quotes
If you plan to employ AI to write for you while reporting to an editor, don’t try to hide it.

It’s not just irresponsible; a keen reader can always spot phoniness. Take Aaron Pelczar, who tried to conceal the fact that he was using artificial intelligence (AI) in his work as a rookie reporter for the Cody Enterprise in Wyoming. According to the New York Times, he lost his job when rival journalists uncovered his deception.

Pelczar tendered his resignation on August 2, two months after he began working for the newspaper. An investigation found that, in addition to writing most of his articles with a large language model, he had been fabricating entire direct quotes out of thin air.

This constituted an egregious violation of journalistic ethics in the age of AI.

“There were some weird patterns and phrases that were in his reporting,” CJ Baker, a staff writer for the Powell Tribune, the rival local paper that broke the story, told the New York Times.

A more cunning con artist might have kept up the ruse longer. In Baker’s view, the fabricated remarks read stiffly and sounded more like press-release copy than anything a real person would say. They included statements attributed to state governors and federal agencies.

Quotes, after all, typically have a person attached to them. So Baker told the NYT that he “went back and started checking on quotes that appeared in this reporter’s stories that had not appeared in other publications or press releases or elsewhere on the web,” and he discovered seven quotes attributed to people who had never actually spoken with Pelczar.

After Baker shared his findings with the Cody Enterprise, the newspaper opened an inquiry, and Pelczar resigned.

Chris Bacon, the editor of the Cody Enterprise, expressed regret in the paper’s Monday editorial.

“I apologise, reader, that AI was allowed to put words that were never spoken into stories,” remarked Bacon.

Fabricated reporting has always been a problem, but artificial intelligence (AI) has the potential to make it easier and more tempting than ever. If there’s one thing chatbots excel at, it’s producing a lot of text quickly and concocting stories with confidence.

The temptation isn’t limited to individual reporters, either; entire newspapers have been caught using large language models dishonestly. Last year, for instance, it emerged that Sports Illustrated had been publishing product reviews written by AI under fictitious bylines.

Naturally, AI’s role in newsrooms remains contested. Beyond the existential threat it poses to the industry, its misuse can erode readers’ trust in newspapers.

“There’s just no way that these tools can replace journalists,” Alex Mahadevan of the media think tank the Poynter Institute told the New York Times. “But in terms of maintaining the trust with the audience, it’s about transparency.”

 

 

#FakeQuotes, #AIGenerated, #QuoteScandal, #TruthMatters, #IntegrityInMedia, #Authenticity, #DigitalDeception, #QuoteVerification, #MediaEthics, #AITransparency, #TrustInContent, #FactCheck, #QuoteAuthenticity, #MisleadingInformation, #SocialMediaIntegrity, #AIAccountability, #ExposeTheFraud, #QuoteFalsification, #EthicalStandards, #DigitalTrust

