
Generative AI Warnings Contain Their Own Dangers

AI Could Undermine Trust in Democracy, Starting With This Very Statement
Jan. 6 insurrectionists didn't need generative AI to undermine democracy (Image: Shutterstock)

Artificial intelligence holds the potential to undermine trust in democratic politics - but overwrought warnings themselves can erode trust in the system that critics seek to preserve, warns a cybersecurity firm.


The political underworld has already disgorged deepfake audio and video clips, including one of a Chicago mayoral candidate supposedly condoning police aggression and one of a U.S. Democratic senator purportedly claiming that Republicans should be barred from voting.

The quality of AI-generated content is good enough that some academics worry unregulated AI will be used to "hack humans."

Israeli cybersecurity firm Check Point says such warnings carry their own danger, given that AI is "still a long way from massively influencing our perception of reality and political discourse."

"What is deteriorating is our trust in the public discourse, and unbalanced warnings might erode this trust further," says a blog post authored by cyber intelligence researcher Yoav Arad Pinkas.

Political scientists envision the possibility of politicians gaining power by using cheap, AI-generated content to target individual voters with personalized messages. "The winner would be the client of the more effective machine," wrote Harvard professors Archon Fung and Lawrence Lessig earlier this year. The outcome would be elections that don't necessarily reflect the will of voters.

"Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies," they wrote.

But it's also possible that citizens will react to a rise in falsehoods by insisting more strongly on truth, Pinkas said. "In a reality where it is increasingly challenging to identify forgeries, the source and context become paramount. In such a situation, the credibility of a person and information source becomes increasingly crucial. When the truth is endangered, deterrence is created through a decreased tolerance for lies and deceivers."

Traditional media, with its commitment to accuracy, could also become more important going forward, as could identity verification that separates humans from bots, he said.

Regardless, generative AI has yet to corrupt public discourse to the degree anticipated by worst-case scenarios. That puts anyone warning about its corrosive effects in the difficult position of raising the alarm without further eroding public trust in the absence of proof, Pinkas said.

"Just as the overstated focus on the vulnerabilities of voting machines might have inadvertently weakened democratic resilience by eroding public trust in voting mechanisms, we could be facing a similar peril with AI."


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



