

Using Generative AI to Promote Misinformation

 

CSDI Report Warns: Democratic Elections Are at Global Risk


Discover how generative AI threatens democratic elections through misinformation, deepfakes, and information pollution. Explore CSDI’s latest findings and regulatory solutions.

 

 

A New Threat to Democracy

Generative AI (Gen AI) is rapidly evolving into a disruptive force across many aspects of life, including democratic elections. According to a new report by the Centre for the Study of Democratic Institutions (CSDI), the misuse of Gen AI technologies now poses an immediate threat to election integrity around the world.

This report, titled “Harmful Hallucinations: Generative AI and Elections”, dives deep into how these technologies are being used to create convincing fake content, manipulate public opinion, and undermine democratic systems.

How Generative AI Is Undermining Trust in Elections

The CSDI report highlights that Gen AI, while not a completely new phenomenon, has now become more dangerous due to its wide accessibility. In fact, producing deceptive content like deepfakes and fake news is easier than ever before.

Lower Costs, Higher Threats

“Generative AI technologies lower the cost of producing deceptive content,” explains Chris Tenove, assistant director of CSDI. He adds that AI has not created entirely new threats but has significantly amplified existing ones.

Major Risks: Three Core Areas of Concern

The report categorizes the threats posed by Gen AI into three key areas:

1. Deception: Deepfakes and Misinformation

Deception is perhaps the most alarming aspect of Gen AI’s misuse. Deepfakes, highly realistic AI-generated images or videos, are now being used to manipulate voters.

Global Examples

  • In the United States, a deepfake robocall imitated President Joe Biden, urging voters to skip the primaries.

  • In India, fake videos of Bollywood stars criticizing Prime Minister Narendra Modi circulated before the 2024 General Elections.

By the time these deepfakes were flagged, they had already misled thousands. The ease of creating such content has eroded the public’s trust in political communication.

2. Harassment: Targeting Political Figures

The CSDI report also sheds light on how Gen AI is being used to harass political figures, especially women.

Disturbing Incidents

  • In the UK, over 400 AI-generated images of female politicians were found on a fake adult website.

  • In India, companies received requests to create explicit deepfakes of politicians from all parties.

According to Netheena Mathews, such incidents cause long-term psychological harm and damage political participation, especially among women.

3. Information Pollution: AI Flooding the News

Perhaps the most far-reaching consequence is how Gen AI is polluting the information ecosystem.

Misleading Chatbots and Overload

  • AI tools like Microsoft’s Copilot gave incorrect election data 33% of the time during the European Union elections in 2024.

  • Chatbots programmed to deliver factual information instead spread misinformation or mixed facts with errors.

As a result, people now question even truthful content, suspecting it might be AI-generated.

Nishtha Gupta points out, “Genuine offenders can now deny true content, claiming it’s a deepfake.” This further muddies public perception and distorts the truth.

Can Generative AI Have a Positive Role in Democracy?

Not all uses of Gen AI are harmful. The report does acknowledge a few promising applications.

Constructive Examples

  • Bhashini, an AI initiative by the Indian government, allowed PM Modi to communicate in various regional languages.

  • AI tools are also helping to summarize policy documents, translate political speeches, and moderate online debates.

These examples show that AI can support democratic engagement when used ethically and transparently.
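To make the constructive case concrete, here is a minimal sketch of the kind of policy-document summarization mentioned above. The library and model (Hugging Face transformers with facebook/bart-large-cnn) and the sample text are illustrative assumptions for this sketch, not tools named in the CSDI report.

```python
# Illustrative sketch only: condensing a policy excerpt with an off-the-shelf
# summarization model. The model choice is an assumption, not from the CSDI report.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

policy_excerpt = (
    "The proposed electoral reform bill introduces stricter disclosure rules for "
    "online political advertising, requires platforms to label synthetic media, "
    "and creates an independent oversight body to audit compliance with both."
)

# max_length and min_length bound the size of the generated summary (in tokens).
result = summarizer(policy_excerpt, max_length=60, min_length=15, do_sample=False)
print(result[0]["summary_text"])
```

Used transparently, with human review and clear labeling of machine-generated text, tooling like this can widen access to political information rather than pollute it.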

Should Governments Regulate Gen AI in Elections?

Opinions among experts differ when it comes to how and when to regulate Gen AI in the context of elections.

A Balanced Approach

Chris Tenove urges caution:

“Regulations should be designed to protect free speech while ensuring fair participation in elections.”

He warns against rushing to draft new laws that may be exploited by those already in power. Instead, he supports modifying existing laws to better reflect today’s digital challenges.

Mathews and Gupta, however, stress the need for forward-thinking legislation. They argue that enforcement of existing rules is currently weak, especially in rapidly digitizing democracies like India.

Recommendations: A Multi-Stakeholder Solution

The report concludes with a call for collaborative solutions. Combating misinformation fueled by Gen AI will require:

  • Stronger cooperation between AI companies, journalists, and governments

  • Transparency in AI systems and tools

  • Accountability for those who misuse AI technologies

  • Public awareness campaigns to help citizens identify fake content

AI Is Not the Root, But It Is the Accelerator

The misuse of Gen AI is not the origin of misinformation, but it is certainly speeding it up. As Gupta puts it,

“AI is just the latest software update to an existing problem.”

The challenge now is to adapt democratic systems to deal with this new reality without sacrificing freedom of expression or stifling innovation.

In summary:

  • Gen AI is lowering the cost of misinformation and making fake content more convincing.

  • It is being used to deceive, harass, and flood information spaces with falsehoods.

  • Deepfakes and AI-generated propaganda have already impacted elections in the US, India, and EU.

  • Regulation must be balanced and thoughtfully crafted to avoid abuse.

  • A united effort is needed to defend democracy against AI-powered threats.


#GenerativeAI, #ElectionSecurity, #Deepfakes, #AIMisinformation, #AIRegulation, #Democracy, #CSDIReport, #DigitalElections, #IndiaElections2024, #USElections2024, #FakeNews