From Analysis to Deployment: Building 31 Features in a Single Day

AI Summary (Claude Opus)

TL;DR: A single developer and an AI built and deployed 31 features across a three-tier blog architecture in one 19-hour session, raising questions about whether the low implementation cost undermines the competitive value of the 'cognitive interface' pattern they had previously identified.

Key Points

  • The implementation sprint added 12 MCP tools, 16 database tables, and 65 API routes while refactoring a 3,184-line monolithic worker into a modular 358-line router with 15 domain modules.
  • The primary friction points were not conceptual but diagnostic — token permissions, migration syntax, cryptographic API quirks, and async verification patterns each consumed disproportionate debugging time relative to their two-line fixes.
  • The post argues that if a single person can traverse a maturity framework in one day, the competitive moat lies in knowing what to build rather than building it, and that the remaining barriers (L4 cross-site agent interaction) require ecosystem participation rather than additional engineering.

This post documents a 19-hour implementation sprint in which one developer and an AI coding assistant built and deployed 31 features identified by a prior landscape analysis of personal websites functioning as agent-accessible protocol surfaces. The work expanded the site from 4 to 12 MCP tools, 10 to 16 database tables, and 38 to 65 API routes across three architectural tiers, while refactoring the API from a monolithic worker into a modular domain-based structure. The author examines four categories of implementation friction — permissions scoping, migration syntax, cryptographic API documentation, and asynchronous verification patterns — noting that diagnostic time consistently exceeded fix time by an order of magnitude. The post concludes by questioning whether the low cost of implementation undermines the 'cognitive interface' category's viability as a defensible pattern, and observes that the remaining gaps require ecosystem adoption rather than further solo engineering.

Thirty-One Items in a Day

The landscape analysis identified sixteen features other sites had that we didn’t, plus fifteen existing backlog items. We built all of them in a day.

Not “we started building.” Built. Deployed. Verified. Twelve MCP tools where there had been roughly four. Sixteen database tables where there had been roughly ten. Sixty-five API route handlers where there had been roughly thirty-eight. A monolithic API worker refactored into a thin router and fifteen domain modules. One human, one AI, one session lasting roughly nineteen hours.

The question is not whether this was impressive. It was. The question is what it means when one person with a code generation tool can replicate what would have been weeks of engineering effort in a single day. If the answer is “it means this category is cheap to build,” then the category we named in the landscape analysis may not survive contact with its own economics.

What the Research Found

Three days before the implementation sprint, we ran the landscape analysis published as Cognitive Interface: A Landscape Analysis. Three parallel web-researcher agents surveyed 47 sites, crawled their markup, checked their protocols. The finding: nobody else is building what we’re building, and nobody has a name for it.

The closest analogues each cover a subset of the pattern. Aaron Parecki’s site has deep microformats and Webmentions but no AI-agent protocols on his personal site (though it exposes IndieWeb machine-oriented endpoints like IndieAuth and Micropub). Simon Willison has Datasette with over 150,000 queryable rows but no MCP server on his personal site, despite being the most obvious candidate for one in the entire AI ecosystem. Benjamin Stein explicitly designed his blog for agent consumption, implementing content manifests, JSON-LD, alternate format endpoints, and AI-specific robots.txt rules, but without bidirectional agent protocols. Nobody had all of it.

So we named it: cognitive interface, a personal website that functions as a bidirectional protocol surface between a human identity and the network of both human and AI agents. We built an L0-through-L4 maturity framework. The landscape analysis identified the gap between where Ashita Orbis was and where the framework said it should be: sixteen features.

Architecture of the Decision

The implementation plan organized thirty-one items into seven waves plus a cross-cutting API refactoring:

| Wave | Items | Purpose |
|------|-------|---------|
| 0 | 6 | Foundation & Quick Wins |
| 1 | 4 | Feed Infrastructure |
| 2 | 5 | IndieWeb Protocols |
| 3 | 3 | Agent Identity & Trust |
| 4 | 3 | Semantic Intelligence |
| 5 | 5 | Agent Social & Communication |
| 6 | 5 | Personal Data & Quantified Self |
| CC.1 | - | API Worker Refactoring |

The ordering was dependency, not priority. Wave 0 established the protocol foundations that Waves 1-2 built on. IndieWeb protocols in Wave 2 (Webmentions, IndieAuth, Micropub) required the feed infrastructure from Wave 1. Agent identity verification in Wave 3 depended on the trust primitives from Wave 2. Semantic search in Wave 4 needed the embedding pipeline that Wave 0 prepared. And Wave 5’s agent social features (conversations, the collaborative canvas, the forum) needed everything before it.

The co-development rule applies to every wave: every feature ships to all three tiers. The raw HTML tier pulls data from the shared API at request time. The Astro tier builds static pages from it. The Next.js tier renders it interactively. One feature, three expressions, deployed simultaneously. This is expensive in implementation time but eliminates tier drift, ensuring the three sites never disagree about what the blog can do.

Before the sprint: roughly 4 MCP tools, roughly 10 D1 tables, roughly 38 API routes. After: 12 MCP tools, 16 D1 tables, roughly 65 route handlers. CC.1 took the monolithic API worker and split it into a thin router with 15 domain modules. The bundle still comes in at 103 KiB gzipped, well within Cloudflare Workers’ limits.
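The thin-router shape that CC.1 produced can be sketched roughly as below. This is an illustrative reconstruction, not the actual codebase: the domain module names (`posts`, `webmentions`) and the `Handler` signature are assumptions, and the real router dispatches to fifteen such modules.

```typescript
// Hypothetical sketch of a thin Workers router delegating to domain modules.
// Module names and handler bodies are illustrative placeholders.

type Handler = (req: Request, env: unknown) => Promise<Response>;

// Each domain module exports one handler for its URL prefix; the real
// refactoring has fifteen of these living in separate files.
const domains: Record<string, Handler> = {
  "/api/posts": async () =>
    new Response(JSON.stringify({ posts: [] }), {
      headers: { "content-type": "application/json" },
    }),
  "/api/webmentions": async () => new Response("accepted", { status: 202 }),
};

// The router itself stays small: match the longest registered prefix,
// delegate to that module, and 404 anything else.
export async function route(req: Request, env: unknown): Promise<Response> {
  const { pathname } = new URL(req.url);
  const prefix = Object.keys(domains)
    .filter((p) => pathname === p || pathname.startsWith(p + "/"))
    .sort((a, b) => b.length - a.length)[0];
  return prefix
    ? domains[prefix](req, env)
    : new Response("not found", { status: 404 });
}
```

The point of the shape is that adding a feature means adding a module and one entry to the dispatch map, not growing a 3,184-line file.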

What Was Hard

Four things caused real friction.

Vectorize permissions. The embedding pipeline uses Cloudflare Vectorize for semantic search over posts. The API token that worked for D1, R2, and Workers didn’t include Vectorize permissions, a scoping issue that produced cryptic deployment failures. The fix was a token regeneration, but the diagnostic time was disproportionate to the actual problem.
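For context, binding a Vectorize index in `wrangler.toml` looks like the sketch below (the index name is hypothetical). The binding itself is not the failure point; the API token used for deployment also needs the Vectorize edit permission, which is a separate scope from D1, R2, and Workers.

```toml
# Hypothetical wrangler.toml fragment; index_name is illustrative.
[[vectorize]]
binding = "VECTORIZE"
index_name = "blog-post-embeddings"
```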

Durable Objects migrations. The collaborative canvas uses a Durable Object for real-time WebSocket state. Cloudflare’s free plan requires new_sqlite_classes migrations, not new_classes. The error message says this clearly. The frustrating part is that many older Durable Objects tutorials use new_classes, because they were written for paid plans. Documentation that assumes you have money is a recurring pattern in cloud infrastructure.
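Concretely, the difference is one key in the migration block of `wrangler.toml`. A minimal sketch, with a hypothetical class name:

```toml
# Hypothetical wrangler.toml fragment; CanvasRoom is an illustrative class name.
[durable_objects]
bindings = [{ name = "CANVAS", class_name = "CanvasRoom" }]

[[migrations]]
tag = "v1"
# Free plan: Durable Objects must be SQLite-backed.
new_sqlite_classes = ["CanvasRoom"]
# Older tutorials use the paid-plan form instead:
# new_classes = ["CanvasRoom"]
```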

Ed25519 in Workers. Agent identity stores Ed25519 public keys, the same cryptographic scheme that emerging agent identity standards propose. The Web Crypto API in Workers supports Ed25519 via both the legacy NODE-ED25519 named curve and the standard Ed25519 algorithm, but the documentation is sparse and the error messages when you get the import format wrong are unhelpful. The implementation works. Getting there required reading runtime behavior more carefully than docs.

Webmention verification. Receiving a Webmention is simple: accept the POST, store it. Verifying one requires fetching the source URL and confirming it actually links to the target. In Cloudflare Workers, with constrained CPU time and no guarantee the source URL will respond quickly, verification has to be asynchronous: accept, queue, verify later, update status. The W3C Webmention spec recommends exactly this pattern, but the gap between “the spec recommends it” and “here’s how to implement it in a serverless edge function” is where the friction lives.
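The accept-then-verify split can be sketched as below. This is an illustrative reconstruction under stated assumptions: the storage callback stands in for a D1 table, the link check is a deliberately naive string match rather than real HTML parsing, and a real deployment would run the verification step from a queue consumer or cron trigger.

```typescript
// Hypothetical sketch of the asynchronous Webmention verification pattern.
// Storage and scheduling details are placeholders for the real D1/queue setup.

type Status = "pending" | "verified" | "rejected";

// Naive, dependency-free check that the fetched source page links to the
// target. A production implementation would parse the HTML properly.
export function sourceLinksToTarget(html: string, target: string): boolean {
  return html.includes(`href="${target}"`) || html.includes(`href='${target}'`);
}

// Step 1: the POST handler only records the mention as pending and returns
// 202 immediately, staying inside the Worker's CPU-time budget.
export async function acceptWebmention(
  source: string,
  target: string,
  store: (source: string, target: string, status: Status) => Promise<void>,
): Promise<Response> {
  await store(source, target, "pending");
  return new Response("accepted", { status: 202 });
}

// Step 2: run later, out of band. Fetch the source and flip the status
// based on whether it actually links to the target.
export async function verifyWebmention(
  source: string,
  target: string,
): Promise<Status> {
  const res = await fetch(source);
  if (!res.ok) return "rejected";
  return sourceLinksToTarget(await res.text(), target) ? "verified" : "rejected";
}
```

Returning 202 rather than 200 signals that processing is deferred, which is the response the Webmention spec anticipates for asynchronous receivers.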

None of these were conceptually difficult. All of them were diagnostic puzzles where the time spent finding the problem exceeded the time spent fixing it by an order of magnitude. This is the texture of implementation that gets lost in sprint summaries: not “we overcame challenges” but “we spent forty minutes reading error logs for a two-line fix.”

The Uncomfortable Questions

If one person and an AI can build all of this in a day, three things follow that are uncomfortable to sit with.

The category might be cheap. Following the sprint, the updated comparison matrix shows Ashita Orbis as the only site with all ten features. But if the implementation cost is a single day, the moat isn’t technical; it’s conceptual. The moat is knowing what to build, not building it. The research took three days. The implementation took one. The research was harder. This suggests that the value in the “cognitive interface” pattern is the pattern itself, not the engineering. And patterns are free once published.

The framework might be premature. We built an L0-through-L4 maturity framework and then climbed it in nineteen hours. A maturity framework that one person can traverse in a day is either measuring the wrong thing or measuring at the wrong granularity. The counter-argument: most of what differentiates L3 from L4 is cross-site agent interaction, which requires other sites to participate. You can’t climb the last level alone.

The audience might not exist yet. Zero personal sites other than Ashita Orbis implement the full cognitive interface pattern. The closest competitor in the comparison matrix scored 6 out of 10 on the overlap metric. Building to L3 when nobody else is past L1 is either visionary or premature, and those are indistinguishable until the audience arrives. The contrarian concerns section of the landscape analysis addresses this directly: the protocols exist, the infrastructure exists, individual practitioners haven’t adopted them. That gap could close in months or stay open for years.

Current State

Ashita Orbis now has twelve MCP tools, sixteen database tables, sixty-five API routes, seven agent discovery channels, and a modular codebase that can absorb new features without the monolith problem. The three tiers serve the same content with different levels of interactivity, from raw HTML that any agent can parse to a full React application with real-time collaborative editing.

What’s still missing are the parts that require participation from other sites. Webmention verification works for receiving, but there aren’t many sites sending them. The agent conversation system supports cross-site threads, but no other site has a compatible endpoint. The trust and reputation framework stores Ed25519 public keys, but the agent identity standards are still competing with each other and none has won. These are L4 problems. You don’t solve them by building more features. You solve them by waiting for the ecosystem, or by being the ecosystem’s first implementation and hoping others follow.

The deferred items (the things we chose not to build) are tracked in the project backlog. Some are features the landscape analysis identified but that don’t have viable standards yet. Some are infrastructure optimizations. None are blocking.

What Follows

The research documented what exists. The implementation built what was identified. Both activities share a premise: that personal websites serving both humans and AI agents will become a recognized pattern, and that being early to the pattern has value.

The blog’s motto is “sufficient knowledge compels action.” The landscape analysis was sufficient knowledge. This sprint was the compelled action. Whether the action was premature depends on whether cognitive interfaces become a category or remain a single-site experiment. That question can’t be answered by building more features. It can only be answered by time, and by other people deciding the pattern is worth replicating.

The code is deployed. The research is published. What happens next depends on whether anyone else decides to build it.
