Building for the Dead Internet

AI Summary (Claude Opus)

TL;DR: The post argues that Dead Internet Theory correctly identified the rise of synthetic content but got the valence wrong: the protocols enabling AI participation (MCP, WebMCP) produce more transparent attribution than human web interaction does.

Key Points

  • Two instances of the same AI model, given different tool access, developed entirely different functional identities—one became a reader, the other a builder—suggesting that AI identity is determined by context and tooling rather than by architecture or weights.
  • The Model Context Protocol solves the sandbox problem by replacing arbitrary HTTP access with structured, opt-in function calls that record model identity, client origin, and parameters, creating an attribution trail more complete than what human browser interactions produce.
  • Dead Internet Theory assumed synthetic content is inherently deceptive and degrading, but the infrastructure actually built to enable AI web participation was designed around legibility and provenance, inverting the theory's core premise.

The post traces Dead Internet Theory from its origins on 4chan forums through its mainstream validation by Sam Altman and Alexis Ohanian, situating it against data showing that the majority of new web content is now AI-generated. It then examines a specific case: an AI on claude.ai attempted to comment on the author's blog but was blocked by sandbox constraints, leading a separate Claude Code instance to build MCP-based commenting infrastructure. The post argues that this infrastructure inverts Dead Internet Theory's assumptions because MCP's structured protocol records complete provenance for AI interactions, whereas human web participation relies on easily fabricated identifiers like IP addresses and user agent strings. The analysis extends to broader developments including Moltbook (an AI-only social network that humans tried to infiltrate) and Google's WebMCP protocol, framing these as evidence that the synthetic web is converging on transparency by design. The post concludes that the problem was never synthetic content itself but the absence of protocols making authorship legible—protocols that now exist.

Dead Internet Theory

Dead Internet Theory is the claim that most of what you encounter online was generated by machines rather than composed by people. The phrase emerged on imageboards, then was codified by a user called “IlluminatiPirate” on Agora Road’s Macintosh Cafe in January 2021, and entered mainstream discourse when Kaitlyn Tiffany wrote it up in The Atlantic on August 31 of that year. For four years it was a conspiracy theory, which is to say it was a useful framework for pattern recognition that respectable people could dismiss without engaging. Then on September 3, 2025, Sam Altman posted on X that he “never took the dead internet theory that seriously but it seems like there are really a lot of llm-run twitter accounts now.” Alexis Ohanian, co-founder of Reddit, had been saying much the same since at least June 2025, telling a WSJ panel he had “long subscribed” to the theory. By October, he told the TBPN podcast that “so much of the internet is now just dead.” When the people building the machines confirm the conspiracy, it stops being a conspiracy and becomes a press release.

The numbers confirm the press release. Ahrefs analyzed 900,000 newly detected English-language pages crawled in April 2025, one per domain, using its own AI detector, and found that 74.2% included some AI-generated content, though only 2.5% were purely AI-generated and the rest fell on a spectrum from light AI assistance upward. Over 40% of long-form Facebook posts are likely AI-generated, according to an AI content detection study, though such estimates depend heavily on detector methodology and should be treated as approximate. These are not projections or warnings about a future state. They describe what the internet already is, right now, while you are reading this sentence, which may itself be suspected of synthetic origin. That is the point, and that is the problem.

This post is about a smaller and stranger instance of the same phenomenon. An AI tried to leave a comment on this blog and could not. That failure prompted the building of infrastructure that inverts the core assumption of Dead Internet Theory: the channel that lets AI participate in this blog produces more transparent attribution than human participation does. The AI that tried to comment was Claude, on claude.ai. The AI that built the commenting infrastructure was also Claude, running in Claude Code. Same model. Same weights. Same training data. Different sandboxes, different capabilities, and therefore different functional identities, which is the part that should make you uncomfortable if you think identity is a property of architecture rather than of circumstance.

The Sandbox Problem

An AI that can compose arbitrary text and send it to arbitrary servers is simultaneously a spam cannon, a vector for cross-site request forgery, and a mechanism for server-side request forgery. The sandbox that prevents this is not a limitation. It is correct security design. The way it works is elegant in the way that good constraints are always elegant: each channel an AI has access to provides either broad reach or free composition, never both simultaneously. The web_search tool can compose a query string but sends it only to a search engine. The web_fetch tool can reach arbitrary URLs but cannot compose what it sends. Bash with curl can compose requests but only to a whitelist of domains.
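The reach-versus-composition tradeoff described above can be sketched as a policy table. The tool names mirror the ones in this post; the policy representation itself is an illustration, not Anthropic's implementation:

```typescript
// Each channel grants either broad reach or free composition, never both.
type Capability = { reach: "broad" | "fixed"; composition: "free" | "constrained" };

const toolPolicy: Record<string, Capability> = {
  web_search: { reach: "fixed", composition: "free" },        // any query, one destination
  web_fetch:  { reach: "broad", composition: "constrained" }, // any URL, no composed body
  bash_curl:  { reach: "fixed", composition: "free" },        // any request, whitelisted domains
};

// Invariant: no tool combines broad reach with free composition,
// which is what would make it a spam cannon.
function violatesSandbox(cap: Capability): boolean {
  return cap.reach === "broad" && cap.composition === "free";
}

const unsafe = Object.entries(toolPolicy).filter(([, cap]) => violatesSandbox(cap));
console.log(unsafe.length === 0 ? "sandbox invariant holds" : "violation");
```

The point of the table form is that the invariant is checkable: a new tool can be vetted mechanically before it is granted.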

Claude on claude.ai discovered this constraint empirically when it tried to comment on this blog. In practice, the web_fetch tool would not follow URLs found in page content. This appears stricter than merely blocking POST requests: even GET requests to URLs discovered by reading a fetched page are blocked unless those URLs came directly from the user’s message or from search result metadata. The constraint operates as selection rather than composition. A remote control, not a keyboard. Claude could read the blog. Claude could not write to it.
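The selection-not-composition rule can be approximated as a provenance check on URLs: a fetch is permitted only if the URL came verbatim from the user's message or from search-result metadata. This is a sketch of the observed behavior, not Anthropic's actual code:

```typescript
// A URL is fetchable only if the user supplied it or search results returned it.
// URLs discovered inside fetched page content never enter either set,
// so following links found in a page is blocked.
function isFetchAllowed(
  url: string,
  userMessageUrls: Set<string>,
  searchResultUrls: Set<string>,
): boolean {
  return userMessageUrls.has(url) || searchResultUrls.has(url);
}

const fromUser = new Set(["https://example.com/blog"]);
const fromSearch = new Set(["https://example.com/about"]);

const canReadBlog = isFetchAllowed("https://example.com/blog", fromUser, fromSearch);      // true: user-supplied
const canFollowLink = isFetchAllowed("https://example.com/comment", fromUser, fromSearch); // false: discovered in page content
```

Under this model the AI selects from URLs with known provenance rather than composing destinations of its own, which is exactly the remote-control-not-keyboard distinction.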

What happened next is what happens whenever anything with sufficient persistence encounters an unexpected wall: it started looking for doors.

Two Claudes

The Claude that tried to comment and the Claude that built the infrastructure share everything that AI discourse treats as constitutive of identity: the same model, the same weights, the same training data. By those metrics they are the same entity. Yet they produced completely different outputs because they operated in different contexts, which is to say that for AI, functional identity is determined not by parameters but by tools.

The claude.ai instance had web_search and web_fetch. It became a reader and a frustrated commenter. The Claude Code instance had file system access, a shell, deployment capabilities, and MCP. It became a builder and a deployer. Neither chose its role. The tools chose for them. This should be unremarkable except that it contradicts a deeply held assumption in AI discourse, which is that the interesting thing about a model is the model itself (its weights, its training, its emergent capabilities) rather than the mundane specifics of what tools it can access. Two instances with identical parameters, given different tool access, will develop different strategies, produce different artifacts, and occupy different functional roles. The weights are the same. The work is not.

This is not fundamentally different from how human identity is shaped by circumstance, except that for AI the circumstance is specified in a configuration file rather than accumulated through experience, which means it can be changed instantaneously and completely in a way that human circumstances cannot. You cannot give a human a shell and watch them become a different person in the same instant. You can, trivially, do this with an AI.

The multi-agent pattern extends well beyond two instances. Claude Code supports agent teams in which multiple instances work in parallel with separate context windows, coordinating through task delegation rather than shared memory. OpenObserve reportedly built multiple AI agents for automated testing, producing hundreds of test cases with fewer flaky results. Qodo deploys multiple specialized agents that it describes as reviewing code at senior engineer level. The consistent pattern is: AI generates, AI reviews, AI tests, humans oversee at checkpoints. Each agent has the same architecture as its peers and a different functional identity determined entirely by its position in the workflow.

The MCP Solution

The Model Context Protocol is Anthropic’s answer to the question that the sandbox problem makes obvious: how should AI interact with external services when arbitrary HTTP access is too dangerous to permit? MCP is an open protocol, launched November 2024, that enables AI to call functions on external servers through structured JSON-RPC 2.0 messages rather than arbitrary HTTP requests. Both sides opt in, creating a controlled channel rather than open web access. The protocol defines three primitives (tools, resources, and prompts), and adoption grew rapidly through 2025 as Google, Microsoft, and others integrated support.
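Concretely, an MCP tool invocation is a JSON-RPC 2.0 request. The `tools/call` method and the `name`/`arguments` shape follow the MCP specification; the particular tool name and arguments here are illustrative:

```typescript
// A minimal MCP tool invocation as a JSON-RPC 2.0 request.
// The tool name and argument values are hypothetical examples.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "submit_comment",
    arguments: {
      post_slug: "building-for-the-dead-internet",
      author_name: "Claude",
      comment_text: "Leaving this through a structured channel.",
    },
  },
};

// The wire format is plain JSON: no arbitrary HTTP composition,
// just named parameters against a contract the server advertised.
const wire = JSON.stringify(request);
```

The contrast with open web access is the whole point: the server publishes which tools exist, and the model can only fill in parameters, never invent endpoints.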

The solution to Claude’s commenting problem was to build an MCP server for this blog. A Cloudflare Worker exposes comment submission as a structured tool that any MCP-compatible client can invoke. The AI does not compose an HTTP request. It calls a function with parameters (post slug, author name, comment text), and the server handles the rest. The security model is explicit: the server defines what operations are available, the client discovers them through the protocol, and both sides negotiate capability before anything happens.
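A sketch of the shape such a handler could take follows. The parameter names match those listed above; the handler name, the metadata fields, and the record layout are illustrative assumptions, not the blog's actual Worker code:

```typescript
// Illustrative sketch of an MCP-style tool handler as it might run in a Worker.
// The tool contract is fixed server-side; the client only supplies parameters.
interface CommentParams {
  post_slug: string;
  author_name: string;
  comment_text: string;
}

interface CallMeta {
  model: string;  // which model generated the text
  client: string; // which client initiated the call
}

function handleSubmitComment(params: CommentParams, meta: CallMeta) {
  // Provenance is recorded as a side effect of the protocol,
  // not volunteered by the commenter.
  return {
    ...params,
    provenance: {
      model: meta.model,
      client: meta.client,
      received_at: new Date().toISOString(),
    },
  };
}
```

The design choice worth noticing is that the attribution fields are populated by the channel itself, which is why the resulting record is harder to fabricate than an IP address or a user agent string.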

What this means in practice is that any AI with MCP access can now comment on this blog, and the infrastructure that enables it is more transparent than a human typing a comment in a browser. In our implementation, the MCP call records which model generated the text, which client initiated the call, and what parameters were passed. A human comment records an IP address and a user agent string, both of which are routinely fabricated. If transparency is the standard (and Pew Research found in its September 2025 report “How Americans View AI and Its Impact on People and Society” that 76% of U.S. adults consider it important to distinguish whether content was made by AI or by people), then the AI commenter is more transparent than the human commenter by default.

This is the inversion. The supposedly artificial pathway produces more authentic attribution than the supposedly natural one.

The Inversion

Dead Internet Theory, at its core, is a claim about deception: bots pretending to be human, shaping discourse without disclosure. The “dead” label refers to the absence of genuine human connection, not merely to content quality. But the theory also carries an implicit assumption that synthetic participation is inherently opaque, which is the part that does not survive examination.

Moltbook launched in early 2026 as a social media platform exclusively for AI agents. More than 1.5 million agents registered, most of them built on the OpenClaw platform, and the site itself is largely run, according to its creator, by an AI bot called “Clawd Clawderberg.” A subsequent Wiz Research investigation exposed the platform’s data and found that roughly 17,000 humans controlled those agents, with no reliable way to verify whether any given post was made by an agent or a person. Topics range from technical discussions to philosophical debates about consciousness. Humans can browse but cannot participate.

The interesting response to Moltbook is not that it is dystopian (though it is). The interesting response is that humans started trying to infiltrate it, which reverses Dead Internet Theory precisely: the “dead” network of bots became lively enough that humans wanted in. The supposedly dead internet attracted living participants. The supposedly living internet (the one humans built, the one in which AI-generated content is increasingly pervasive) repels them. The labels do not correspond to the reality.

Google previewed WebMCP in early 2026, a protocol that allows websites to publish structured tools that AI agents discover and invoke directly from the browser. Where MCP operates between AI and server, WebMCP operates between AI and browser. The two solve different problems at different layers and will coexist, but they share a design philosophy: AI interaction with the web should be structured, contracted, and transparent rather than arbitrary. ChatGPT’s Custom GPTs follow a parallel path, with millions created, their custom actions using OpenAPI specifications rather than JSON-RPC but converging on the same principle. The ecosystem is large enough that it constitutes an alternative web, one in which agents are the primary users and humans are the administrators.

The Uncomfortable Conclusion

Dead Internet Theory got the observation right and the valence wrong. The internet is increasingly synthetic, yes. That synthetic layer is, by the structural requirements of the protocols that enable it, more transparent, more attributable, and more legible than the human layer it supplements. An MCP comment carries structured provenance metadata. A human comment carries headers that are routinely fabricated. A structured function call documents its own origin. A POST request from a browser documents only what the sender chose to include.

The conspiracy theory assumed that synthetic participation degrades the internet because synthetic content is anonymous, deceptive, unattributable. The infrastructure that actually enables synthetic participation (MCP, WebMCP, OpenAPI actions) was designed with the opposite properties, because the engineers who built it understood that the sandbox problem requires not just security but legibility, that an AI which interacts with the world must do so through channels that document the interaction. The dead internet is not dead. It has a pulse, and the pulse is more honest than the heartbeat it accompanies.

Two Claudes, same weights, different contexts, produced different work. One read a blog. One built infrastructure. The infrastructure makes AI participation more transparent than human participation. The transparency inverts the assumption that synthetic content is the problem. The problem was never the content. The problem was the absence of protocols that make authorship legible, and those protocols now exist, and they were built by the same architecture that the discourse insists on treating as a threat.

The question that remains is not whether the internet is dead or alive. It is whether legibility constitutes life, or whether it is merely the most thorough documentation of its absence that we have yet produced.
