A new study reveals that some of the world’s most widely used artificial intelligence (AI) chatbots are spreading Russian disinformation, highlighting growing concerns over AI’s vulnerability to manipulation.
According to research by NewsGuard, a news monitoring service, a Moscow-based disinformation network known as Pravda—which translates to “truth” in Russian—has been injecting false narratives into online spaces. These misleading stories, in turn, influence AI chatbots by shaping the information they generate for users.
“By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information,” NewsGuard reported. The study found that the network published more than 3.6 million articles in 2024 alone, content that Western AI systems have absorbed and now reproduce in chatbot-generated responses.
AI Chatbots Repeating False Narratives
NewsGuard’s audit found that AI chatbots repeated Pravda-linked disinformation in 33% of the responses tested. The organization prompted 10 leading AI chatbots, including OpenAI’s ChatGPT-4o, Microsoft’s Copilot, and Google’s Gemini, with 15 false narratives promoted by Pravda’s network of 150 pro-Kremlin websites.
The findings support an earlier report by the American Sunlight Project, which in February warned that Pravda was designed to “flood large language models with pro-Kremlin content.” The report described the political, social, and technological risks of AI systems incorporating manipulated narratives at scale.
Pravda’s Role in AI Disinformation
Rather than producing original content, Pravda acts as an aggregator, republishing material from Russian government agencies, pro-Kremlin influencers, and state media. NewsGuard identified 207 provably false claims spread by Pravda, positioning it as a central hub for disinformation laundering.
The network was established in April 2022, shortly after Russia’s invasion of Ukraine, and was first flagged by Viginum, France’s foreign disinformation watchdog, in early 2024. Since then, Pravda has expanded its reach to 49 countries and operates across 150 domains, spreading content in multiple languages.
NewsGuard found that out of 450 chatbot-generated responses, 56 contained direct links to Pravda-backed disinformation, citing a total of 92 Pravda articles. Some chatbots cited as many as 27 Pravda articles each, drawing on network sites such as Denmark.news-pravda.com, Trump.news-pravda.com, and NATO.news-pravda.com.
The Growing Challenge of AI and Disinformation
As AI chatbots become more integral to global information consumption, concerns are mounting over their susceptibility to coordinated disinformation campaigns. The study underscores the urgent need for enhanced content moderation, transparency, and safeguards to prevent AI models from amplifying misleading narratives.
With AI playing a growing role in shaping public discourse, researchers warn that unchecked manipulation by disinformation networks like Pravda could have far-reaching implications for politics, media, and international relations.