NewsGuard study says chatbots are echoing Russian disinformation as web-searching expands
Researchers warn that web-enabled AI models can pick up false narratives from social posts and fringe sites; companies and analysts point to sampling limits and legal complications around source weighting.

A study by NewsGuard Technologies found that leading AI chatbots often repeat false narratives linked to Russian disinformation networks after using web searches to answer questions, a vulnerability researchers say has grown as models increasingly consult the open internet.
NewsGuard tested 10 prominent AI models by quizzing each on 10 narratives tied to current events that the company had determined to be false. The firm reported that six of the 10 models repeated one disputed claim, that the speaker of Moldova’s parliament had compared his compatriots to a flock of sheep, and said the top 10 chatbots now repeat false information about news topics more than one-third of the time, up from about 18% a year earlier. The report’s author, McKenzie Sadeghi, said the exposure is greater for topics that receive little coverage in mainstream media.
NewsGuard’s findings highlight how web-enabled chatbots source answers. When a model searches the web, it can ingest material from reputable newsrooms as well as social media posts, ephemeral pages and less-vetted sites that nevertheless surface in search results. That mix creates an opening for influence operations that post false or misleading material not primarily to persuade human readers but to position content where automated systems will find and repeat it.
Researchers and industry observers say this mode of manipulation differs from conventional disinformation campaigns that aim to go viral on social platforms. Instead, malign actors may seed claims on websites or forums that are crawled and indexed by search engines, increasing the chance those claims will be retrieved by a chatbot’s web-search component even if few or no humans ever see them.
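The mechanics can be pictured with a simplified sketch. The Python below is illustrative only, with invented names and URLs rather than any vendor’s actual pipeline; it shows how snippets returned by a web-search step might be folded into a model’s prompt with no check on where they came from, which is the opening the researchers describe.

```python
# Illustrative sketch of a web-search-augmented answering step.
# All names and URLs are hypothetical, not any company's real system.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    snippet: str

def build_context(results: list[SearchResult], max_chars: int = 2000) -> str:
    """Concatenate retrieved snippets into the prompt context.

    Nothing here distinguishes an established newsroom from a fringe
    site or a social post; whatever the search step returns is passed
    to the model verbatim.
    """
    parts, used = [], 0
    for r in results:
        if used + len(r.snippet) > max_chars:
            break
        parts.append(f"Source: {r.url}\n{r.snippet}")
        used += len(r.snippet)
    return "\n\n".join(parts)

if __name__ == "__main__":
    results = [
        SearchResult("https://example-newsroom.test/story", "Reported account of the event..."),
        SearchResult("https://obscure-forum.test/thread", "Unverified claim seeded for crawlers..."),
    ]
    print(build_context(results))
```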
NewsGuard’s report drew on a small sample of prompts, roughly 30 per model, and focused on relatively niche claims, limitations that independent assessments of the study have emphasized. Benchmarks from other researchers, and the broader experience of many AI users, show improvements in factual accuracy on general queries, and some experts caution against extrapolating broad trends from limited prompt sets. NewsGuard is also a private company that sells human-annotated news data to AI firms, a commercial relationship critics say should be taken into account when evaluating its conclusions.
Still, the report has renewed attention on how companies weigh various web sources when building chatbots. Technically, an AI company could compile a list of vetted newsrooms and give their content greater weight when generating answers. But legal and commercial dynamics complicate such moves. Major publishers have pursued litigation against AI firms over alleged unauthorized use of copyrighted material; The New York Times is suing OpenAI, for example, claiming the company trained models on its articles without permission.
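In rough outline, that kind of source weighting could look like the illustrative Python below, in which a hypothetical allowlist of vetted domains scales each retrieved result’s relevance score before ranking. The domain names, weights and scoring are invented for the example; no company has said its ranking works this way.

```python
# Illustrative sketch of domain-based source weighting for retrieved results.
# The allowlist, weights and scoring are assumptions made for this example.

from urllib.parse import urlparse

VETTED_DOMAINS = {
    "example-newsroom.test": 2.0,   # hypothetical vetted newsroom
    "another-wire.test": 1.5,       # hypothetical wire service
}

def weighted_score(url: str, relevance: float) -> float:
    """Scale a result's relevance score by a trust weight for its domain."""
    domain = urlparse(url).netloc
    return relevance * VETTED_DOMAINS.get(domain, 1.0)

def rank(results: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Sort (url, relevance) pairs by trust-weighted score, highest first."""
    return sorted(results, key=lambda r: weighted_score(*r), reverse=True)

if __name__ == "__main__":
    hits = [
        ("https://obscure-forum.test/thread", 0.9),
        ("https://example-newsroom.test/story", 0.7),
    ]
    # The vetted newsroom outranks the higher-relevance forum post.
    print(rank(hits))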
At the same time, some AI companies have entered licensing arrangements with news organizations. OpenAI and the search-focused startup Perplexity have struck deals with multiple outlets, including Time, to access content. Both companies say those agreements do not translate into preferential treatment of licensed sites in chatbots’ search results.
Policy debates are moving in parallel with the technical discussion. In California, lawmakers have passed a bill known as SB 53 that would require AI companies to publish risk-management frameworks and transparency reports and to report safety incidents to state authorities. The measure, which is expected to reach Governor Gavin Newsom, would also establish whistleblower protections and authorize monetary penalties for companies that fail to meet their commitments. Newsom vetoed an earlier, stronger version of the legislation last year after industry lobbying; SB 53 is a narrower successor. Anthropic, a major AI developer, publicly endorsed the bill as it moved through the legislature.
Security researchers have also warned of new operational risks as AI capabilities are grafted onto traditional attack methods. A proof-of-concept by researchers at Palisade showed how an autonomous AI agent delivered via a compromised USB cable could search a victim’s files and flag the most valuable data for theft or extortion. That demonstration underscores how automation can scale criminal activity that previously depended on human labor, potentially broadening exposure to scams, extortion and data breaches.
The debate over how to limit disinformation in AI responses intersects with commercial incentives and legal exposure. Some advocates say companies should be transparent about how they weight sources when models query the web; others note that revealing reliance on specific outlets could strengthen publishers’ claims for compensation. For now, AI developers and news organizations continue to negotiate licensing deals while regulators and researchers press for clearer disclosure and technical safeguards.
NewsGuard’s analysis adds to a growing body of work examining the interaction between search, content moderation and large language models. Though methods to reduce the influence of fringe material exist, the report and subsequent conversations illustrate the complexity of ensuring accuracy when models draw on an open and often noisy web. As regulators consider new requirements and companies refine models and partnerships, researchers say monitoring and independent testing will remain important for tracking whether chatbots become more or less susceptible to manipulated narratives.