OpenAI CEO Sam Altman recently noted on X (formerly Twitter) in his characteristic lowercase style that he “never took the dead internet theory that seriously,” but observed a significant increase in accounts operated by large language models (LLMs).
Altman, who leads the company behind ChatGPT, the widely used AI text generator, encountered ironic responses on the platform. One user, for instance, mimicked ChatGPT’s often obsequious language, replying, “You’re absolutely right! This observation isn’t just smart—it shows you’re operating on a higher level.”
Altman’s comment referenced a concept that gained traction from a 2021 post on the Agora Road’s Macintosh Cafe forum. This theory proposed that the internet, once a lively human domain, had become inert, dominated solely by bots. IlluminatiPirate, the anonymous originator of the idea, stated then that “The Internet feels empty and devoid of people,” lamenting the loss of genuine human interaction and claiming the internet had been “hijacked by a powerful few.”
A conspiracy theory
In 2021, before ChatGPT’s release the following year, the notion of an internet dominated by bots seemed implausible, as did the accompanying claim that “the U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population.” *The Atlantic* even published an article on the subject, titled “The ‘Dead-Internet Theory’ Is Wrong but Feels True.”
Although bots—automated scripts, such as the crawlers search engines use to rank websites and the automated accounts that post on social media—were already present online, they were incapable of producing credible content at that time.
Adam Aleksic, a linguist and author of *Algospeak: How Social Media is Transforming the Future of Language*, commented to TIME that “We didn’t have AI working at that scale where you actually really could have believable AI accounts running the internet.” He added that what was once considered “a lunatic fringe conspiracy theory… is looking a lot more real” now.
Death of the internet
The internet’s content creation business model traditionally relies on advertisers compensating creators based on audience engagement, enabling creators to produce more content and users to consume what they enjoy. However, recent years have seen a shift, with human viewership becoming less essential.
A March report from ad analysis firm Adalytics revealed millions of instances since 2020 where advertisements for major brands, including Pfizer and the NYPD, were displayed to web-crawling bots instead of actual users, diminishing the value of ad spending. In some notably ironic cases, Google’s ad server delivered these ads to Google’s own bots. Furthermore, data from a cybersecurity company indicates that bot traffic has steadily increased over the past decade, reaching 51 percent in 2024—the first time it has exceeded human internet traffic.
While bots already accounted for a share of internet traffic, they were largely passive until OpenAI, led by Sam Altman, initiated the generative AI surge in 2022. Since then, the volume of AI-generated content has dramatically increased. According to Originality AI, a company specializing in AI content detection, the proportion of websites in Google’s top-20 search results featuring AI-generated content has surged by 400% since ChatGPT’s introduction.
Aleksic commented, “It is in the business interest of platforms to cram slop down our throats, because over time, if there’s more AI accounts, they have to pay human creators less.”
Search engines, including Google, started offering AI-generated summaries of online articles. This allowed users to access content overviews directly from the search results, bypassing the need to visit original creators’ websites. The resulting reduction in clicks translated into decreased advertising revenue for content creators.
With AI-generated content becoming more sophisticated, its influence has extended beyond social media. In August, *Dispatch* reported that articles attributed to “Margaux Blanchard” in *Wired* and at least five other publications were removed after it was discovered the author was an AI. AI-generated writing also gives scammers a cheap, efficient new way to turn a profit.
Consequently, IlluminatiPirate’s concept of a digital landscape dominated by bots appears increasingly realistic.
The human cost
The proliferation of low-quality “slop” content also poses challenges for AI developers. LLMs such as ChatGPT rely on internet data for training. Should AI-generated summaries continue to redirect revenue from original content creators, the supply of high-quality training data could diminish, leaving model developers with insufficient resources. A 2024 paper revealed that AI models experience a “collapse” when trained on data they themselves have produced.
To address this, some internet infrastructure companies, such as Cloudflare, have proposed restricting access to the websites they serve, requiring bots to pay for entry. Such measures could help creators recover essential revenue for content production. Cloudflare CEO Matthew Prince shared his perspective with TIME in an interview, stating, “My utopian vision is a world where humans get content for free, and robots have to pay a ton for it.”
The implications extend beyond the internet itself. Humans, much like large language models, absorb information from what they read. Aleksic noted, “It’s not just that bots are surrounding us. It’s that we are starting to become more like the bots.” In July, *Scientific American* reported an increase in the usage of ChatGPT-common words, such as “delve” and “meticulous,” in conversational podcasts following the product’s 2022 launch. Altman also remarked on Monday that “real people have picked up quirks of LLM-speak.”
While linguistic evolution isn’t inherently negative, online algorithms already “represent reality differently than it actually exists” by favoring extreme content from human users. AI-generated content could further warp our collective sense of reality by diminishing the human input that grounds online discussions in authentic perspectives.
Aleksic concluded, “We have a growing perception gap in America where people think that other people’s views are more extreme than they actually are. It’s AI psychosis on a mass scale.”