The ‘Dead Internet’ Theory Gains Traction as AI Floods Online Content
Surge in bot activity, AI-generated articles and ad fraud revive a once-fringe idea that the web is increasingly populated by machines, threatening creators, search quality and shared reality.

The idea that much of the internet is increasingly run by machines — once dismissed as a fringe conspiracy known as the “dead internet” theory — has taken on renewed plausibility as automated traffic and large language model output surge across the web.
Tech industry leaders and recent research point to rising bot activity, a growing share of AI-generated content in search results, and advertising systems that in some cases pay to reach automated crawlers rather than real users. The combination has renewed concerns that platforms and publishers may be funneling attention, revenue and the raw material for future AI systems away from human creators.
The phrase “dead internet” was popularized in 2021 by a post on the forum Agora Road’s Macintosh Cafe, where a pseudonymous user wrote that “the Internet feels empty and devoid of people” and suggested a handful of powerful actors had hijacked online discourse. At the time, mainstream coverage treated the idea as unlikely: automated scripts and search-engine crawlers were real, but did not produce the kind of credible, conversational content that large language models would later make possible.
That changed after the public release of ChatGPT in late 2022. Security firm Imperva reported in 2024 that bots accounted for 51 percent of global internet traffic, the first time automated traffic exceeded human traffic. Originality AI, which builds tools to detect machine-generated text, said the share of websites in Google’s top-20 search results containing AI-generated content rose roughly 400 percent after ChatGPT’s debut. The ad-analysis firm Adalytics found millions of instances dating to at least 2020 in which ad impressions were served to bots rather than real people; in some cases, in a quirk of scale, Google’s ad server delivered ads to Google’s own crawlers.
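Screening out this kind of waste is conceptually simple even if it is hard at scale: impression logs can be checked against known crawler signatures before anyone is billed. The sketch below is illustrative only; the tab-separated log format and the signature list are assumptions, not Adalytics’ methodology or Google’s systems.

```python
# Sketch of screening ad-impression logs for crawler traffic before
# billing. The tab-separated log format and the signature list are
# hypothetical, not any vendor's real schema.
CRAWLER_SIGNATURES = ("Googlebot", "bingbot", "AdsBot-Google", "GPTBot")

def is_bot_impression(log_line: str) -> bool:
    """Flag an impression whose user-agent field matches a known crawler."""
    user_agent = log_line.rsplit("\t", 1)[-1]   # user agent is the last field
    return any(sig in user_agent for sig in CRAWLER_SIGNATURES)

impressions = [
    "2020-06-01T12:00:00\tad-42\tMozilla/5.0 (Windows NT 10.0)",
    "2020-06-01T12:00:01\tad-42\tMozilla/5.0 (compatible; Googlebot/2.1)",
]
billable = [line for line in impressions if not is_bot_impression(line)]
print(f"{len(billable)} of {len(impressions)} impressions look billable")
```

In practice, sophisticated bots spoof human user agents, which is why findings like Adalytics’ focus on the cases that are easiest to catch and hardest to excuse.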
Those shifts have practical consequences for creators and platform economics. Advertising revenue, which historically supported independent writers, photographers and video producers, depends on human attention. When search engines present AI summaries of articles, or when users get answers from chatbots without clicking through to original reporting, the resulting drop in pageviews cuts income for publishers and individual creators. Scholars and industry analysts warn that less human-created material could, over time, degrade the datasets used to train future models.
“We didn't have AI working at that scale where you actually really could have believable AI accounts running the internet,” linguist Adam Aleksic, author of Algospeak: How Social Media is Transforming the Future of Language, said of the earlier skepticism. “It is in the business interest of platforms to cram slop down our throats, because over time, if there's more AI accounts, they have to pay human creators less.”
The problem is not limited to advertising. AI-generated articles under fabricated bylines have begun to appear in established outlets: in one case, stories attributed to a purported author called “Margaux Blanchard” were removed from Wired and other publications after they were determined to be AI-created. For malicious actors, machine-generated prose offers new avenues for scams and disinformation that are cheaper and harder to trace than handcrafted content.
The technical implications are equally stark. A 2024 paper in Nature documented how machine-learning models can “collapse” when trained on data they themselves produced, yielding progressively poorer outputs and compounding their own mistakes. That feedback loop worries developers who rely on high-quality human-created writing, code and images to build and refine large language models.
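The collapse dynamic can be seen in a toy statistical analogy: repeatedly refit a simple model to samples drawn from the previous fit, and estimation error compounds until the fitted distribution loses the spread of the original data. The sketch below is a simplified illustration of that analogy, not the Nature paper’s actual language-model experiments.

```python
# Toy illustration of "model collapse": repeatedly fit a Gaussian to
# samples drawn from the previous fit. Each refit estimates the spread
# with error, so the variance drifts and, over many generations, decays
# toward zero, erasing the tails of the original "human" data. This is a
# simplified analogy, not the Nature paper's language-model experiments.
import random
import statistics

random.seed(0)
n = 100                                             # samples per generation
data = [random.gauss(0.0, 1.0) for _ in range(n)]   # the real, human-made data

for generation in range(301):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # The next generation is trained only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(n)]
```

Once the original samples are discarded, no later generation can recover the information they carried, which is why developers treat human-created data as a nonrenewable resource.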
Some cloud and infrastructure companies have proposed measures to blunt the trend. Cloudflare executives have suggested restricting automated access to hosted sites and requiring bots to pay for entry, an approach the company said could help restore revenue to human creators. “My utopian vision is a world where humans get content for free, and robots have to pay a ton for it,” Cloudflare CEO Matthew Prince said in an earlier interview.
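One way such a gate could work in practice is the long-dormant HTTP 402 “Payment Required” status code. The sketch below, using only Python’s standard library, shows the general shape of the idea; the crawler watchlist and the price header are hypothetical, not Cloudflare’s actual protocol.

```python
# Minimal sketch of a "pay per crawl" gate using HTTP 402, built on the
# Python standard library. The crawler watchlist and the X-Crawler-Price
# header are illustrative assumptions, not Cloudflare's real protocol.
from http.server import BaseHTTPRequestHandler, HTTPServer

KNOWN_CRAWLERS = ("GPTBot", "CCBot", "Googlebot")   # hypothetical watchlist

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if any(bot in agent for bot in KNOWN_CRAWLERS):
            # 402 Payment Required: tell the bot that access has a price.
            self.send_response(402)
            self.send_header("X-Crawler-Price", "0.01 USD per request")
            self.end_headers()
            self.wfile.write(b"Automated access requires payment.\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<p>Free for human readers.</p>\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), PayPerCrawlHandler).serve_forever()
```

The hard part is not returning the status code but reliably telling bots from humans, and deciding who sets the price.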
Beyond economics and model performance, researchers and commentators caution about cultural effects. Language has long shifted in response to technology, and certain turns of phrase associated with LLM output, words such as “delve” or “meticulous,” have been appearing more often in podcasts and online posts since the arrival of generative chatbots. Sam Altman, chief executive of OpenAI, acknowledged the resemblance in a post on X, noting the proliferation of accounts and conversational patterns that echo LLM output. “i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now,” he wrote.
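Such shifts are measurable in principle: count marker-word occurrences per thousand tokens in dated text and compare periods before and after a cutoff. A rough sketch, in which the marker list and sample sentences are illustrative assumptions rather than any researcher’s actual corpus:

```python
# Sketch of measuring "LLM marker" word frequency in text. The marker
# list and the sample sentences are illustrative assumptions.
import re

MARKERS = {"delve", "meticulous", "meticulously"}

def marker_rate(text: str) -> float:
    """Marker-word occurrences per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return 1000 * sum(t in MARKERS for t in tokens) / max(len(tokens), 1)

before = "We looked closely at the data and wrote up what we found."
after = "Let us delve into the findings, which were meticulously compiled."
print(f"before: {marker_rate(before):.1f} per 1,000 tokens")
print(f"after:  {marker_rate(after):.1f} per 1,000 tokens")
```

A real study would control for topic and speaker, since a rising rate could reflect either machine-written text or humans absorbing machine habits.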
The growth of machine-like language among humans carries political implications as well. Algorithms that amplify extreme or attention-grabbing content already distort perceptions of public opinion; if a larger share of the content pool were machine-generated, that distortion could deepen. Aleksic warned of a widening perception gap in which people believe others hold more extreme views than they actually do, a phenomenon he described as “AI psychosis on a mass scale.”
Platforms and policymakers face difficult trade-offs. Search engines and social networks benefit from automated summarization and content curation that keep users on-site, but those same features can siphon traffic away from source creators. Charging automated systems for access, tightening verification for accounts, and improving detection of synthetic text are among the remedies under discussion, but each raises technical and legal questions about access, censorship and the open web.
For now, the evidence suggests the internet is not literally dead, but it is changing in ways that complicate the relationships among creators, platforms, advertisers and AI developers. The mix of human and machine-produced material will determine both the quality of online discourse and the raw data available to train future systems. How companies and regulators respond in the coming months and years will shape whether the web remains a space anchored in human experience or becomes increasingly mediated by machine-generated content.