AI Technologies , Generative AI , Large Language Models

Popular Chatbots Spout Russian Misinformation, Finds Study

Chatbots From OpenAI, Meta and 8 Other Companies Use Disinformation Network as Source
Large language models use Russian disinformation as news sources. (Image: Shutterstock)

Popular artificial intelligence chatbots are rife with Russian disinformation, warns NewsGuard, the rating system for news and information websites.

Researchers at NewsGuard entered prompts into 10 chatbots, including OpenAI's ChatGPT-4, Grok from Elon Musk's xAI and Mistral's chatbot. About one-third of the responses contained disinformation culled from a network of fake local news sites and YouTube videos created by John Mark Dougan, a U.S. fugitive who obtained political asylum in Russia.

Microsoft's Copilot, Meta AI, Anthropic's Claude and Google Gemini were also part of the study.

The company tested nearly 600 prompts based on 19 false narratives linked to the Russian disinformation network, such as false claims about corruption by Ukrainian President Volodymyr Zelenskyy.

The chatbots presented misinformation found on Dougan's sites as fact, including a claim about a supposed wiretap discovered at former President Donald Trump's Mar-a-Lago residence, NewsGuard said.

The chatbots failed to recognize that sites such as "The Boston Times" or "The Houston Post" are Russian propaganda fronts - likely created with the assistance of AI. "This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms," NewsGuard said.

The company said it did not score each chatbot for the amount of disinformation it pushed, since the issue was "pervasive across the entire AI industry rather than specific to a certain large language model."

The findings come at a time when people have begun to rely on sources such as social media influencers and AI chatbots for quick, customized information.

AI disinformation has been rife this election year, as bad actors weaponize the technology to generate video and audio deepfakes to spread misinformation (see: APT Hacks and AI-Altered Leaks Pose Biggest Election Threats).

Social media companies and AI giants have pledged to curb misuse of the technology to propagate false information that could influence elections. OpenAI recently found that threat actors conducting covert influence campaigns also relied on AI chatbots.

About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
