Generative AI Concerns Grow in Record Election Year
Davos Forum Highlights Worries About AI-Enabled Misinformation

Artificial intelligence-enabled voter misinformation campaigns and voter database hacking are among the largest threats to election security in a year when more than half of the world's population will head to the ballot box in elections ranging from free to flawed.
Voters in more than 45 countries are set to cast ballots this year, including in the United States, the European Union and, likely, the United Kingdom. The Economist calculated that elections will affect more than 4 billion people, while the World Economic Forum assessed they will determine leadership in countries that produce more than half of the world's gross domestic product.
A survey published Thursday by the WEF - the organization behind an annual talkathon held each January in Davos, Switzerland - found that 72% of respondents are actively integrating current events into cyber risk management. One-third of CISOs surveyed separately said they are increasing their use of threat intelligence reports, since global events can have macro-level impacts on enterprises.
The report warns that threat actors may exploit developments in AI to run large-scale, automated disinformation campaigns that are difficult to detect.
Organized campaigns spreading disinformation through social media could cast doubt on the integrity of elections and plant false information ahead of balloting. The threat isn't theoretical, the report says. Slovakia's September 2023 election was marred by a deepfake audio recording, posted to social media just 48 hours before voting began, in which the head of the country's main social-liberal party and a journalist appeared to discuss vote buying. Fact-checkers from the AFP determined that the recording appeared to use an artificial voice trained on the voices of real people.
Threat actors may also use the developments in technology to generate deepfake videos and to micro-target political ad campaigns.
The report also warns about targeted hacking of voter databases. While generative AI adds to the complexity of cyberdefense, so does the sharp growth in malware families and variants over the past five years, the report says.
Only months ago, a government agency overseeing elections in the U.S. capital said "some D.C. voter information was accessed through a breach" of its website hosting provider.
Similar concerns over election security have been raised by the European Union Agency for Cybersecurity, which warned in November that EU elections slated for June 2024 are at risk from automated disinformation campaigns using deepfakes tied to Russian, Iranian and Chinese nation-state groups.
In anticipation of potential election disruption, the EU agency held a joint exercise in November to assess its preparedness for potential cybersecurity attacks (see: Election Integrity Fears in Europe Provoke Joint Exercise).
The WEF report, which is based on responses from 120 executives, says that nearly 60% of the respondents believe AI is likely to give more advantages to cyberthreat actors than to defenders.
The respondents say they are most concerned about the increase in AI-generated phishing and malware campaigns, data leaks involving AI systems, threats emerging from the software supply chain used to develop the technology, and legal concerns relating to intellectual property and copyright.