Erosion of Trust Most Concerning Threat to UK Elections
AI-Led Disinformation Campaigns and Deepfakes Are the Biggest Threats, Experts Warn

Nation-state-led disinformation campaigns that aim to erode public trust are the biggest threat to the upcoming U.K. election, experts told a parliamentary panel on Monday.
Speaking to the Joint Committee on the National Security Strategy inquiry into election security, a University of Nottingham academic told the panel that the worst outcome wouldn't necessarily be hackers altering the election's outcome. "Rather, it's the wider impact on the perceived legitimacy of the elections and the trust people have on the outcomes of the election," said Rory Cormac, professor of international relations.
"Hostile actors want us to turn in on each other because once they undermine this trust, it would be hard for the government to regain this," Cormac said, adding that Russian, Iranian and Chinese threat actors are likely to be at the forefront of such campaigns.
Researchers from George Washington University in Washington, D.C., have predicted a summer deluge of election disinformation in a year that's setting records for the number of citizens affected by balloting. The Economist has calculated that elections this year will affect more than 4 billion individuals, and the World Economic Forum reported that these individuals will determine leadership in countries that produce more than half of the world's gross domestic product (see: AI Disinformation Likely a Daily Threat This Election Year).
These threat actors are also likely to co-opt developments in artificial intelligence to scale their operations, Cormac added.
Incidents of disinformation created with artificial intelligence have already been reported. In September 2023, elections in Slovakia were marred by a deepfake audio conversation purportedly between the head of the country's main social-liberal political party and a journalist discussing vote-buying. Authorities in the U.S. state of New Hampshire are investigating AI-generated robocalls mimicking the voice of President Joe Biden that urged voters to stay home during the January primary. A Democratic consultant working for a long-shot challenger to Biden for the nomination later took responsibility for the calls, telling reporters that he did so to draw attention to the threat.
Social media platforms must take measures such as removing fake accounts and fact-checking news on their platforms, said Pamela San Martín, who serves on the Meta Oversight Board. Governments should also communicate directly with their citizens about the dangers of disinformation, she said.
To hold social media platforms more accountable in the effort to stop disinformation and threats from AI-enabled deepfakes, the U.K. government has implemented the Online Safety Act, which became law in 2023.
The law empowers the Office of Communications, known as Ofcom, to shield young users from content that is pornographic or promotes self-harm, and it calls for criminal prosecution of those who send harmful or threatening communications.
Jessica Zucker, director of Online Safety Policy at Ofcom, said the regulator should begin to enforce the law before the elections.
Zucker said the regulator is working with the government in the interim to develop watermarking for AI-generated content on social media platforms, and it is leading research into datasets used to create deepfakes in order to develop a code of practice.