A CSDI report warns that the use of Gen AI to spread misinformation is putting democratic elections at risk globally.
Democratic processes are being profoundly affected by the rapid development of Gen AI technologies. The risks that Gen AI poses to elections were the subject of a recent analysis released by CSDI researchers.
The impact of artificial intelligence on human life is multifaceted, and now extends to worries about losing one’s employment. The generative AI wave has raised significant concerns about its potential worldwide influence on democratic processes such as elections.
The Canadian think tank the Centre for the Study of Democratic Institutions (CSDI) has released a paper that delves deeply into some of the threats that artificial intelligence (AI) poses to elections and other democratic processes. The report, “Harmful Hallucinations: Generative AI and Election,” focuses on the impact of AI on election integrity.
The report, written by Chris Tenove, Nishtha Gupta, Netheena Mathews, and others, highlights both the advantages and disadvantages of Gen AI.
Owing to numerous instances over the past few months, 2024 has been designated the year of “Deepfake Elections.” Numerous Gen AI technologies have been deployed in political campaigns in the US, India, and the European Union. The study notes that while Gen AI is not entirely new, the ease of access to and rapid improvement of AI tools have significantly lowered the barriers to creating deceptive content such as AI-generated misinformation, manipulated media, and even deepfakes.
According to Chris Tenove, assistant director of CSDI, “generative AI technologies lower the cost of producing deceptive content, and in doing so, they amplify existing threats to democracy.” He noted that rather than causing completely new problems, AI has only made old ones worse.
For a number of years, the CSDI has been researching how different technologies affect democratic institutions. With elections scheduled in the US, India, Brazil, and other countries in 2024, the research team set out to examine the possible harmful uses of artificial intelligence in elections.
“There is a lot of hype around generative AI and a lot of doomsaying around the potential impacts it might have on politics and elections. So, we wanted to assess the real evidence that we could find about the types of harmful uses that might be in play, and get a sense of what impacts they would have and identify solutions to those threats.”
According to the report, the risks posed by Gen AI fall into three primary areas:
Misleading voters with deepfakes
This is one of the most alarming aspects of Gen AI’s capacity to undermine the integrity of elections. Gen AI can create highly realistic deepfakes that are convincing enough to mislead or sway voters. Earlier this year, a deepfake of Joe Biden went viral in New Hampshire. Spread via robocalls, it featured the President asking people to save their votes for the general election rather than participate in the primaries. According to the report, the tactic was deployed to suppress voter turnout.
Closer to home, in India, AI-generated videos of Bollywood actors criticising PM Modi and endorsing his political opponents surfaced ahead of the General Elections. By the time these videos were flagged as deepfakes, they had been widely shared and had misled thousands.
Targeted harassment of candidates
Gen AI’s capacity to amplify targeted harassment of political candidates was another key area underscored in the report. Mathews cited an incident in which over 400 doctored images of women from across political parties were featured on a fake pornography website ahead of the UK elections.
In India, there have been “reports of AI experts or AI content generation companies receiving numerous requests to create explicit deepfakes or superimposed images of politicians,” according to Mathews. This trend has raised serious concerns about the ethical boundaries of AI.
Mathews said that the emotional and psychological impact of such harassment can be far-reaching, regardless of whether the target is an active political figure or not.
Polluting the information environment
According to the report, the most pervasive risk of Gen AI is its ability to flood the information ecosystem with misleading and factually incorrect content. In some cases, AI chatbots programmed to provide election-related information were found to produce incorrect results. During the 2024 European Union elections, Microsoft’s Copilot reportedly gave inaccurate election data one-third of the time.
The sheer volume of AI-generated content, be it intentional misinformation or accidental errors, can make it difficult for people to discern fact from fiction.
“AI has complicated the information environment and political discourse by making it harder to access reliable information quickly. We now see cases where people dismiss true images or information as AI-generated. On the other hand, genuine offenders can deny offensive content about them, claiming it’s a deepfake or AI-generated,” said Gupta.
Even though the negative impacts dominate news feeds, Gen AI can also benefit elections and other democratic processes. The authors cited Bhashini, developed under the Indian government’s National Language Technology mission, which allowed Prime Minister Narendra Modi to reach citizens in different languages. Other positive uses include AI systems that moderate online debates to encourage fruitful discussion, tools that summarise policy documents, and real-time translation of political speeches.
Regulatory approaches
When asked whether countries need to hurry to frame regulations around Gen AI and elections, Tenove cautioned against rushing to create new AI-specific laws. “I would be hesitant to rush to develop rules for two reasons. One, we know that regulation of election communication is a way that governments, parties, and individuals in power try to maintain power. And so we want to have regulations that are really conscious of freedom of expression and fair participation in elections.” Tenove added that the complexity of the issue makes it difficult to develop effective regulations quickly. Instead, he suggested that “governments should commit to nuanced, perhaps bold policies that are attentive to the existing frameworks to protect elections.”
On the other hand, Mathews emphasised the need for forward-looking legislation given how rapidly digital technologies evolve. She highlighted the importance of enforcing existing rules, citing recent issues in India where “the existing rules weren’t being enforced in a way that they should have been.”
Gupta agreed, saying, “I don’t think rushing into AI-specific legislation is going to do much, because the core issues (of misinformation) … predate AI. AI is, you could say, the latest update, a software update, to this long-existing problem.”
The report calls for a balanced approach to regulating Gen AI with respect to elections. The authors also warned against rushing to enact stringent laws without fully understanding their implications. “While we need to act quickly to address the risks posed by GenAI, we also need to ensure that regulations do not stifle innovation or infringe on freedom of expression,” Gupta explained.
To mitigate the challenges posed by Gen AI, the study suggests a multi-stakeholder approach involving collaboration between AI service providers, journalists, and governments. The authors also highlighted transparency and accountability as crucial steps.