I Asked Claude to Make Me a Blog: Agentic Coding and the Three-Tier Result

AI Summary (Claude Opus)

TL;DR: A blog built entirely through agentic coding—where the AI selected the architecture, wrote the code, and deployed the result—reveals that the real bottleneck in web development was never implementation skill but architectural judgment and editorial curation.

Key Points

  • The three-tier architecture (raw HTML, Astro, Next.js) serving identical content isolates abstraction cost as a variable, demonstrating that Astro delivers substantially faster loads and far less JavaScript than Next.js for equivalent static content.
  • Agentic development inverts the traditional specification-execution relationship: the human defines goals and reviews proposals while the system determines implementation, shifting the critical skill from syntax fluency to architectural judgment.
  • Existing legal frameworks for authorship—from the U.S. Copyright Office's human-contribution requirement to China's process-creativity standard—share an increasingly unsustainable assumption that creative contribution maps neatly onto the human-machine distinction.

The post documents the construction of a three-tier blog (raw HTML, Astro, and Next.js) built primarily through natural language instruction to Claude Code, Anthropic's agentic coding environment. The three-tier structure functions as a controlled comparison isolating the cost of each abstraction layer, with performance data showing significant differences in JavaScript payload and load times across equivalent static content. The author examines the authorship implications of a workflow in which the AI system independently selected the hosting platform, directory structure, deployment pipeline, and design architecture, arguing that existing copyright frameworks inadequately address cases where human contribution is curatorial rather than generative. The post concludes that agentic development represents not merely an acceleration of existing workflows but a structural replacement, revealing that the historical bottleneck in web development was decision-making about what to build rather than the mechanical act of building it.

I Asked Claude to Make Me a Blog

Agentic coding is the delegation of software development to an autonomous system that reads codebases, plans implementations, and executes workflows through natural language instruction rather than manual specification. Claude Code, Anthropic’s terminal environment for this purpose, operates not as an autocomplete mechanism but as a development partner that receives goals and returns working software.

The distinction matters because it reframes the act of building something: the developer’s role shifts from writing code to defining constraints, reviewing decisions, and approving outputs. This is the context in which a blog was built by asking for one.

The prompt that initiated this project was unrefined to the point of absurdity. It read, in part: “I’d like you to create a plan to make a blog to share our work. I don’t know much about blogs, so you’ll have to explain a lot.” That prompt contains an admission which would have been unusual for a developer five years ago (not knowing much about blogs while attempting to build one from scratch), yet in the current development landscape it functions as a reasonable starting point. The system responded with architecture options, framework comparisons, deployment strategies, and a content plan. Most of it was predictable. One element was not.

The Three Tiers as Epistemological Apparatus

The proposal included an unusual suggestion: build three versions of the same blog at three levels of technical complexity, and let the comparison itself become content. The first tier is plain HTML served as static files, with no build step, no framework, and minimal JavaScript limited to lightweight API calls for dynamic content. The second tier uses Astro, a static site generator that renders pages as pure HTML by default and activates JavaScript only where explicitly requested through its islands architecture. The third tier uses Next.js, a React framework that defaults to Server Components in its App Router but still carries the weight of client-side hydration, routing infrastructure, and runtime overhead for any interactive functionality.
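The first tier's "minimal JavaScript limited to lightweight API calls" can be sketched in a few lines. This is a hypothetical illustration, not the blog's actual code: the endpoint path, the response shape, and the list markup are all assumptions.

```typescript
// Hypothetical sketch of the raw tier's only JavaScript: fetch dynamic
// content from the shared API and build plain HTML, with no framework,
// no build step, and no hydration.
interface PostSummary {
  title: string;
  slug: string;
}

async function loadRecentPosts(endpoint: string): Promise<string> {
  const res = await fetch(endpoint); // e.g. the shared API all three tiers use
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  const posts: PostSummary[] = await res.json();
  // Emit list items as strings; the page stays static apart from this call.
  return posts
    .map((p) => `<li><a href="/posts/${p.slug}">${p.title}</a></li>`)
    .join("\n");
}
```

Everything the raw tier ships is visible in those few lines, which is the point of having it as a baseline.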

All three tiers serve the same core content, though Next.js supports the most interactive functionality. All three connect to the same API. All three share a visual identity derived from a single set of design tokens. The divergence is entirely architectural, which means the comparison isolates abstraction cost as a variable. When the same post loads noticeably faster on the raw tier than on Astro, and faster on Astro than on Next.js (as observed in informal browser testing during development), the difference is not content but machinery. The question this arrangement answers is not “which framework is best” but “what does each layer of abstraction actually cost, and what does it purchase?”
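The shared visual identity can be sketched as a single token source compiled to CSS custom properties that every tier links or inlines. The token names and values below are illustrative assumptions, not the blog's actual palette.

```typescript
// Hypothetical sketch: one token object, emitted as CSS custom properties.
// All three tiers consume the same generated stylesheet, so the visual
// identity diverges nowhere even though the architectures do.
const tokens = {
  "color-bg": "#0b0b0f",
  "color-text": "#e8e8ea",
  "font-body": "Georgia, serif",
  "space-md": "1rem",
} as const;

function tokensToCss(t: Record<string, string>): string {
  const lines = Object.entries(t).map(([name, value]) => `  --${name}: ${value};`);
  return `:root {\n${lines.join("\n")}\n}`;
}
```

Because the tokens live outside any framework, the comparison between tiers measures machinery rather than design drift.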

The performance data is instructive. Astro delivers substantially faster load times and significantly less JavaScript than Next.js for equivalent static content. These numbers reflect a fundamental architectural distinction: Astro’s selective hydration sends no JavaScript unless a component explicitly requires it, whereas Next.js, despite defaulting to Server Components that render on the server and add nothing to the client bundle, still ships hydration logic, routing infrastructure, and runtime code for any page that includes interactive elements. The raw HTML tier, naturally, outperforms both, because there is nothing to outperform when the abstraction layer is absent.

This is not an argument against complexity. Next.js powers the richest interactive features (a playground canvas, client-side state for dynamic tools) that require the kind of client-side state management the other tiers do not attempt. The point is that complexity should be visible and chosen rather than inherited by default. The three tiers make this legibility possible by forcing each abstraction to justify itself against the tier below it.

What the Machine Decided

The more revealing aspect of the project is not what was built but what was decided without human intervention. Claude Code selected the hosting platform (Cloudflare Pages, with Workers for API endpoints and D1 for edge database storage). It chose the directory structure, the deployment pipeline, the content management approach (markdown files in a shared directory consumed differently by each tier), and the design token architecture. It wrote the deployment scripts, configured the DNS records, and set up the CI/CD workflow, all verifiable in the session logs from the build process.
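The shared-directory content approach can be sketched as follows. The frontmatter format (a `---`-delimited block of `key: value` pairs) and the directory layout are assumptions for illustration; each tier would point this loader at the same directory and render the result with its own machinery.

```typescript
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface PostFile {
  slug: string;
  meta: Record<string, string>;
  body: string;
}

// Minimal frontmatter parse; the real files may use a richer format.
function parsePost(slug: string, raw: string): PostFile {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  const meta: Record<string, string> = {};
  if (match) {
    for (const line of match[1].split("\n")) {
      const i = line.indexOf(":");
      if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
    }
  }
  return { slug, meta, body: match ? match[2] : raw };
}

// Each tier consumes the same directory: the raw tier via string templates,
// Astro via components, Next.js via React.
function loadPosts(dir: string): PostFile[] {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".md"))
    .map((f) => parsePost(f.replace(/\.md$/, ""), readFileSync(join(dir, f), "utf8")));
}
```

The content pipeline is deliberately dumb: the divergence between tiers happens at render time, not at load time.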

Each of these decisions was reviewable. Each was, in fact, reviewed. But the decisions themselves originated with the system, which means the development process inverted the traditional relationship between specification and execution. In conventional software development, a human specifies what to build and how to build it, and the tools execute that specification. In agentic development, the human specifies what to build and the system determines how, presenting its reasoning for approval. The shift is from writing blueprints to reading them.

Cloudflare’s infrastructure deserves specific mention because it exemplifies the kind of decision an agentic system makes well. D1, Cloudflare’s serverless database, uses SQLite semantics with opt-in read replication across edge locations, which means data can be distributed closer to users for lower latency reads when the developer enables that capability. Cloudflare’s acquisition of The Astro Technology Company in January 2026 signals a strategic investment in exactly this kind of edge architecture, though Astro the framework remains open source and deployable to any platform. The system chose this stack not because it was told to but because the constraints (free tier hosting, global distribution, minimal operational overhead, support for both static and dynamic content) converged on it.
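A Workers endpoint backed by D1 takes roughly the shape below, using the `prepare().bind().all()` query pattern Cloudflare documents for D1 bindings. The route, table name, and columns are illustrative assumptions, not the blog's actual schema.

```typescript
// Hedged sketch of a Cloudflare Worker serving the shared API from D1.
// Env.DB models the D1 binding surface this sketch relies on.
interface Env {
  DB: {
    prepare(sql: string): {
      bind(...values: unknown[]): { all(): Promise<{ results: unknown[] }> };
    };
  };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/api/posts") {
      // D1 speaks SQLite semantics; the query runs at the edge, near the reader.
      const { results } = await env.DB
        .prepare("SELECT slug, title FROM posts ORDER BY published_at DESC LIMIT ?")
        .bind(10)
        .all();
      return Response.json(results);
    }
    return new Response("Not found", { status: 404 });
  },
};

export default worker;
```

Because the handler only sees `env.DB` as an interface, the same code serves all three tiers and can be exercised against a stub in tests.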

The Authorship Problem

Here the argument becomes uncomfortable. If the system selected the architecture, wrote the code, made the design decisions, and deployed the result, the question of authorship is not rhetorical. It is a genuine legal and philosophical problem that existing frameworks handle poorly.

The United States Copyright Office has established that AI output alone does not qualify for copyright protection, having canceled the original registration for the graphic novel Zarya of the Dawn and reissued a narrower one that excluded the AI-generated images while retaining protection for the human-authored text and the selection and arrangement of visual elements. In China, lower courts have moved toward recognizing copyright in AI-generated content where humans demonstrate sufficient creativity in the production process, most notably in the Beijing Internet Court’s 2023 ruling in Li v. Liu and what secondary legal sources describe as a confirming decision from the Changshu People’s Court in 2025, though these remain individual rulings rather than settled national policy. The European Union requires what its Court of Justice terms “the author’s own intellectual creation,” a standard developed for traditional works that presupposes human creative input at the level of expressive form, though the CJEU has not yet ruled directly on AI-generated works.

These frameworks share an assumption that is becoming increasingly difficult to sustain: that creative contribution maps neatly onto the distinction between human and machine. When a developer writes a detailed specification and an AI implements it, the human contribution is clear. When a developer writes “make me a blog” and reviews the output, the contribution is curatorial rather than generative. What might emerge as a legal standard for curatorial contribution (meaningful editing or intervention over the generated material, as opposed to mere activation of an automated tool) is precisely the kind of distinction that dissolves under examination, because the line between “meaningful editing” and “approval of satisfactory output” is not a bright one.

This blog exists in that ambiguity. The human contribution was the decision to build it, the conversational constraints that shaped its architecture, the editorial judgment applied to its content, and the ongoing curation of what it publishes. The machine contribution was everything else. Whether “everything else” constitutes the substantial creative work or the mechanical execution depends on which theory of authorship one finds persuasive, and the honest answer is that none of the available theories handles this case cleanly.

Agentic Development as Paradigm Shift

The conventional framing of AI-assisted development treats the technology as an accelerant: developers write code faster, debug more efficiently, and produce more output per unit of time. This framing is not wrong, but it obscures the structural change. In this project, agentic development did not accelerate the existing workflow so much as it replaced the workflow with a different one. The cycle of writing code, running tests, reading errors, and fixing bugs gives way to a cycle of defining goals, reviewing proposals, and approving implementations. The skill that matters shifts from syntax fluency to judgment about architecture, from knowing how to implement a feature to knowing whether a proposed implementation is sound.

This has consequences that extend beyond individual productivity. AI appears to be substantially democratizing website creation by removing much of the need for technical knowledge. Platforms like Wix ADI, Framer AI, and similar tools generate entire sites from prompts, handle optimization automatically, and adapt layouts in real time. The three-tier blog described here was built on that same principle, though with more architectural ambition than a template generator provides. The result is that the barrier to publication is now almost entirely a matter of having something to say, which was always the real barrier, though it was previously obscured by the technical one.

The implication is worth stating directly. If the bottleneck in web development was never really the code (which is to say, if the hard part was always the decisions about what to build and why rather than the mechanical act of building it), then agentic systems have not eliminated a bottleneck so much as they have revealed that the bottleneck was elsewhere all along. The developers who thrived on implementation skill face a shifting landscape where architectural judgment may matter more than it used to, and the non-developers who were excluded by implementation difficulty face an environment that no longer excludes them in the same way. Both of these shifts appear to be underway. Neither is comfortable for everyone involved.

The Recursive Problem

This post is itself an artifact of the process it describes. It was written with the assistance of the same system that built the blog it discusses, which means the authorship question applies recursively. The research was conducted by AI agents searching the web and synthesizing sources. The voice was shaped by a style guide derived from analysis of prior writing samples, emphasizing patterns like sentence length, vocabulary register, and structural habits. The argument structure follows conventions common to analytical and academic writing, encoded as stylistic constraints in the system prompt.

At what point does curation become authorship? The question has no stable answer, which is precisely why it is worth asking rather than resolving. The instinct to claim either full ownership (“I directed the work, therefore I am the author”) or full disclaimer (“the AI wrote it, I just approved”) is an instinct toward false clarity. The reality is muddled, and the muddled reality is more honest than either clean narrative.

What remains after the authorship question is set aside, not because it is resolved but because practical reality does not wait for philosophical consensus, is a working blog at three levels of complexity, deployed globally on edge infrastructure, maintained by a single person whose primary contribution was not writing code but deciding what should be built and judging whether the result met that standard. That this is sufficient to produce a functional, performant, architecturally sound website suggests that much of what the industry built over two decades of professionalization was necessary given the tools available at the time but is no longer necessary given the tools available now. The uncomfortable conclusion is that the complexity was real and justified in its context, and that the context has changed enough for much of it to be abstracted away from the practitioner.
