Cognitive Interface: A Landscape Analysis

AI Summary (Claude Opus)

TL;DR: A survey of 47 personal websites and their agent-accessibility protocols finds that no existing term captures sites designed for both human and AI audiences, leading to a proposed new category called 'cognitive interface' with a five-level maturity framework.

Key Points

  • Of 47 sites surveyed, none combined more than 5 of the 11 agent-facing features identified, suggesting the multi-protocol personal site remains an unclaimed category.
  • The proposed L0-through-L4 maturity framework maps personal sites from static HTML (L0) through agent-readable (L1) and agent-interactive (L2) to agent-social (L3) and resident-agent (L4) levels.
  • A contrarian analysis argues that a minimum viable version requires only three features — llms.txt, an MCP server, and content negotiation — and that several additional channels may be premature or redundant.

This landscape analysis surveys 47 personal websites and the emerging protocols that make them accessible to AI agents, including MCP, WebMCP, llms.txt, and content negotiation. The research finds that while individual sites implement one or two agent-facing features, no existing site or established category combines structured data exposure, agent interaction, automated content generation, and resident AI agents. The analysis proposes the term 'cognitive interface' for this category and introduces a cumulative maturity framework ranging from L0 (static personal site) to L4 (resident AI agent). It also traces the conceptual genealogy from Vannevar Bush's 1945 Memex through the IndieWeb movement to the 2024–2026 emergence of agent-specific protocols, while acknowledging that several features may be overengineered and that core agent traffic to personal sites remains minimal.

Cognitive Interface: A Landscape Analysis

This started as a private question: does anyone else build personal sites the way we build this one?

We surveyed 47 sites, checked their protocols, and mapped the landscape. The answer: we did not find an established term that captures what these sites are. The closest analogues each implement one or two features; none combine all of them. So we named the category (cognitive interface), built a maturity framework (L0 through L4), and tried to be honest about what’s overengineered and what assumptions might be wrong.

I’m publishing the raw research because it’s more useful as a shared reference than a private document. The comparison matrix and protocol map should save you time if you’re building something similar. The contrarian concerns section is there because analysis that doesn’t question itself isn’t analysis.

Research conducted February 18, 2026, using three parallel web-researcher agents, Exa Deep Researcher, and Brave Search. Updated February 20, 2026 with multi-model additions.


Executive Summary

Is this a recognized category? No. Ashita Orbis occupies an unclaimed intersection of several movements (digital gardens, IndieWeb, personal APIs, and the emerging agentic web) but we did not find an established term that captures what it actually is. The closest analogues each implement one or two of its features; none combine all six differentiators. This appears to be a genuinely new category.

Top 7 comparable sites:

| Site | Similarity* | Key Similarity |
|---|---|---|
| Aaron Parecki (aaronparecki.com) | 6/10 | Most machine-readable personal site (microformats, Webmentions, IndieAuth, Micropub) |
| Benjamin Stein (benjaminste.in) | 6/10 | Personal blog explicitly designed for agent consumption (content manifest, JSON-LD, Markdown alternate) |
| Joost de Valk (joost.blog) | 5/10 | WordPress content negotiation pioneer (`<link rel="alternate" type="text/markdown">`, Accept header routing) |
| swyx.io | 5/10 | JSON API endpoint, multi-persona content, “Learn in Public” philosophy |
| Simon Willison (simonwillison.net) | 5/10 | Datasette: ~159K rows, 27 tables, JSON API, which makes it the closest to “personal content as queryable database” |
| EJ Fox (ejfox.com) | 5/10 | Explicitly building “Personal APIs” for both humans and robots |
| Gwern.net | 4/10 | Deepest structured metadata, .md URL access, annotation database |

*Similarity scores reflect qualitative assessment of conceptual and feature overlap, not a direct count from the comparison matrix below. The matrix measures binary feature presence across 11 specific technical capabilities; the similarity score also weighs philosophical alignment, data depth, and implementation sophistication.

Proposed category name: Cognitive Interface, meaning a personal website that functions as a bidirectional protocol surface between a human identity and the network of both human and AI agents.


Part 1: Comparable Sites

1.1 Comparison Matrix

| Site | Human Blog | Structured Data | API/Endpoints | llms.txt | MCP Server | WebMCP | Agent Comments | Automated Pulse | Resident Agent | Multi-Tier | Content Negotiation |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Ashita Orbis | Y | Y | Y (OpenAPI) | Y | Y | Y | Y | Y (daily) | Y | Y (3 tiers) | Y |
| Aaron Parecki | Y | Y (microformats2, h-card, JSON-LD) | Y (Micropub) | - | - | - | Y (Webmentions) | - | - | - | - |
| swyx.io | Y | Y (frontmatter) | Y (JSON API at /api/blog) | - | - | - | - | - | - | - | - |
| EJ Fox | Y | Y | Y (Personal API) | - | - | - | - | - | - | - | - |
| Simon Willison | Y | Y (tags, dates) | Y (Datasette JSON API, separate instance) | - | - | - | - | - | - | - | - |
| Gwern.net | Y | Y (YAML: confidence, importance) | - | - | - | - | - | - | - | - | - |
| Maggie Appleton | Y | Y (growth stages) | - | - | - | - | - | - | - | - | - |
| Robb Knight | Y | Y | - (RSS/Atom/JSON feeds) | - | - | - | - | Y (automated /now) | - | - | - |
| IndieWeb (typical) | Y | Y (microformats) | Y (Micropub, IndieAuth) | - | - | - | Y (Webmentions) | - | - | - | - |
| Moltbook | - | Y | Y | - | - | - | Y (agent-only) | - | - | - | - |
| SiteSpeakAI users | varies | - | Y (auto-generated MCP) | - | Y (auto) | Y (auto) | - | - | Y (chatbot) | - | - |
| Cloudflare sites | varies | - | - | - | - | - | - | - | - | - | Y (Markdown for Agents) |
| Benjamin Stein | Y | Y (JSON-LD, content manifest) | Y (alternate format links) | - | - | - | - | - | - | - | Y (Markdown alternate) |
| Joost de Valk | Y | Y (Schema.org) | Y (.md endpoints) | - | - | - | - | - | - | - | Y (Accept header + `<link rel="alternate">`) |
| omg.lol | Y | Y | Y (full REST API) | - | - | - | - | - | - | - | - |
| Will Larson | Y | Y (embeddings corpus) | - | - | - | - | - | - | - | - | - |
| Tantek Celik | Y | Y (h-card, h-feed, ICS calendar) | Y (Micropub, IndieAuth) | - | - | - | Y (Webmentions) | - | - | - | - |
| Damian O’Keefe | Y | - | - | Y | Y (Netlify) | - | - | - | - | - | - |
| Oskar Ablimit | Y | - | Y (MCP) | - | Y | - | - | - | - | - | - |
| Aridane Martin | Y | - | - | - | - | Y | - | - | - | - | - |
| Jason McGhee | Y | - | - | - | - | Y | - | - | - | - | - |
| Junxin Zhang | Y | - | - | - | - | - | - | - | - | - | Y (Cloudflare) |
| Julian Goldie | Y | - | - | - | - | Y | - | - | - | - | - |

Key observation: No site in this matrix has more than 5 of AO’s 11 features. Among the specifically agent-focused protocol features (llms.txt, MCP Server, WebMCP), the maximum found on any other personal site is 2 (Damian O’Keefe: MCP + llms.txt). AO has all 11.

1.2 Closest Analogues (Detailed Profiles)

Aaron Parecki (aaronparecki.com)

The IndieWeb Exemplar. Aaron Parecki is the Director of Identity Standards at Okta and a leading IndieWeb practitioner. His site is the most machine-readable personal site in the traditional web.

  • Microformats2 & h-card: Full structured identity markup covering name, role, image, and biographical links
  • Webmentions: Bidirectional cross-site communication (the human precursor to agent-to-agent interaction)
  • IndieAuth: Decentralized authentication using his domain as identity
  • Micropub: API for creating/editing content on his site from external clients
  • JSON-LD: Schema.org WebSite markup with SearchAction
  • Quantified self (as of February 2026): 420 articles, 4,517 bookmarks, 21,658 checkins, 3,764 notes, 4,443 photos, GPS tracking since 2008
  • Custom CMS (p3k): Self-built, emphasizing data ownership

What AO has that Parecki doesn’t: Agent protocols (MCP, WebMCP, llms.txt), automated content generation (Pulse), resident AI agent, multi-tier architecture, agent reactions/comments from AI agents specifically.

What Parecki has that AO could adopt: IndieAuth (use your domain as login everywhere), Micropub (standardized content creation API), the sheer depth of quantified self data.

Assessment: Parecki’s site is the closest philosophical ancestor, because both sites treat machine-readability as a first-class concern. The gap is generational: Parecki’s machine-readable layer serves the human IndieWeb while AO’s serves AI agents.

swyx.io (Shawn Wang)

The Developer Thought Leader. Shawn Wang (swyx) coined “Learn in Public” and popularized the “AI Engineer” role. His site bridges content creation and developer tooling.

  • JSON API: /api/blog returns structured blog metadata (slug, date, reading time, category, frontmatter, GitHub integration)
  • Multi-persona content: Targets “nontechnical Vibe Coders” through “Professional Software Engineers” to executives
  • Canonical essays: “Learn in Public” and “The Rise of the AI Engineer,” which established him as a thought leader
  • Transparent stakeholder engagement: Portfolio, advisory relationships, equity disclosures

What AO has that swyx doesn’t: Agent protocols, automated content, resident agent, multi-tier architecture, agent interaction.

What swyx has that AO could adopt: The JSON API pattern for blog metadata is simple and powerful. The multi-persona content targeting is sophisticated.
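Based on the fields listed above (slug, date, reading time, category), a response from such an endpoint might be shaped roughly like this. This is a sketch of the pattern, not swyx’s actual payload:

```json
[
  {
    "slug": "learn-in-public",
    "date": "2018-06-06",
    "readingTime": "7 min",
    "category": "essays"
  }
]
```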

Assessment: swyx.io is the closest in intent because both sites are designed to be someone’s public cognitive output. swyx’s API is practical but limited compared to AO’s full protocol surface.

EJ Fox (ejfox.com)

The Personal API Pioneer. EJ Fox wrote “Building Personal APIs” (2025), explicitly arguing for structured, real-time personal data exposed to “any robot (or, uh, human)” who wants it.

  • Personal API: Structured around what he “actually cares about,” providing real-time data about current activities
  • Philosophy: Software as self-expression, APIs as extensions of identity

What AO has that Fox doesn’t: The full protocol stack, multi-tier architecture, agent social features.

What Fox has that AO could adopt: The philosophical framing of “personal APIs” as identity expression, not just technical infrastructure.

Assessment: Fox’s essay is the closest articulation of what AO is building, but his implementation appears to be a single API endpoint rather than a comprehensive agent surface.

Gwern.net

The Deep Metadata Site. Gwern Branwen’s site is probably the most intensely structured personal site on the web.

  • YAML frontmatter: Every page has creation date, confidence level, importance rating, CSS extensions
  • Confidence tagging: Each piece of content has an explicit epistemic status (likely, possible, unlikely, etc.)
  • Progressive disclosure: Navigation across multiple levels, from section headers through thematic clusters to inline annotations
  • Markdown alternate access: Any page’s raw Markdown source is accessible by appending .md to the URL (e.g., gwern.net/zeo.md). An undocumented but functional machine-readable access pattern.
  • Annotation database: A comprehensive structured database backing Gwern’s inline annotations, link previews, and metadata popups. Functions as a personal knowledge graph of external references.
  • Archival completeness: Obsessive documentation of sources
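Based on the fields described above, a page’s YAML frontmatter might look like the following sketch. The values and exact key names are illustrative, not Gwern’s precise schema:

```yaml
---
title: "Zeo sleep experiments"
created: 2010-12-28
confidence: likely   # explicit epistemic status
importance: 4        # importance rating usable for agent prioritization
---
```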

What AO has that Gwern doesn’t: Any explicit layer facing agents. Gwern is the most implicitly machine-readable personal site (deep structure, .md access, annotation DB) but has no explicit agent protocols: no MCP, no llms.txt, no API.

What Gwern has that AO could adopt: Confidence/epistemic status tagging on content. Importance ratings that could inform agent prioritization. The .md URL pattern as a lightweight content negotiation mechanism. The annotation database concept as a personal knowledge graph.

Assessment: Gwern proves that deep structure doesn’t require agent protocols, but also shows the missed opportunity. If Gwern’s metadata and annotation database were exposed via MCP, agents could do extraordinary things with it. The .md URL trick is a form of proto-content-negotiation that predates Cloudflare’s Markdown for Agents.

Simon Willison (simonwillison.net)

The AI Transparency Advocate. Simon Willison is the creator of Datasette and one of the most prolific writers about AI/LLM developments.

  • Content: Deeply technical blog spanning 2002-2026, focused on AI/LLM transparency
  • Datasette: His open-source tool for exploring and publishing data, which is architecturally adjacent to what a personal MCP server does. His personal Datasette instance (datasette.simonwillison.net) exposes ~159,000 rows across 27 tables with a JSON API. This is the closest any personal site comes to AO’s structured data exposure, but it’s a separate tool that is not integrated into the blog itself.
  • Feeds: Extraordinarily granular: an Atom feed at /atom/everything/, plus sub-feeds for individual tags, link types, and content categories
  • No agent protocols: Despite being deeply embedded in the AI ecosystem, his personal site has no MCP, no llms.txt, no API endpoints beyond Atom and Datasette.

What AO has that Willison doesn’t: Agent protocols (MCP, WebMCP, llms.txt), automated content generation (Pulse), resident agent, multi-tier architecture. Willison’s Datasette is the closest architectural parallel to AO’s agent surface, but it’s deployed as a separate application, not as an integrated layer of his personal site.

What Willison has that AO could adopt: The volume and consistency of writing. The extensive per-tag sub-feeds model. The Datasette approach of making ALL personal data queryable (~159K rows). The transparency about AI tool usage.

Assessment: The most surprising gap on this list. Willison should be the first person with MCP on his personal site, given his Datasette work. That he doesn’t suggests the “personal site as agent surface” idea hasn’t propagated even to the most obvious candidates. His Datasette instance is the closest existing thing to “personal content as a queryable API”; it just hasn’t been connected to agent protocols.

Benjamin Stein (benjaminste.in)

The Agent-Friendly Blog Pioneer. Benjamin Stein’s blog is a personal site explicitly designed for agent consumption alongside human reading.

  • Content manifest: A machine-readable index of all blog content with metadata, enabling agents to discover and navigate the site’s full corpus without crawling.
  • Alternate format links: Each page provides <link> tags pointing to Markdown, JSON, and other alternate representations. Agents can follow these to access structured versions.
  • JSON-LD: Full Schema.org structured data markup throughout the site.
  • Markdown alternate: Every post available in Markdown format via alternate links.

What AO has that Stein doesn’t: The full protocol stack (MCP, WebMCP, llms.txt), agent social features (reactions, comments), automated content generation, resident agent, multi-tier architecture.

What Stein has that AO could adopt: The content manifest pattern, which is a single index file that maps all site content with metadata. This is complementary to llms.txt (which provides a summary) and MCP (which provides interactivity). A manifest sits between them: comprehensive but static.
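A content manifest of this kind might look like the following sketch. The field names are illustrative, not Stein’s actual schema:

```json
{
  "site": "https://example.com",
  "generated": "2026-02-18",
  "entries": [
    {
      "title": "Designing for Agent Readers",
      "url": "/posts/agent-readers",
      "markdown": "/posts/agent-readers.md",
      "published": "2025-11-02",
      "tags": ["agents", "web"]
    }
  ]
}
```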

Assessment: Stein is the closest individual practitioner to AO’s approach of deliberately designing for agent access. His implementation is lighter but shows the same intent.

Joost de Valk (joost.blog)

The WordPress Markdown Alternate Pioneer. Joost de Valk (founder of Yoast SEO) implemented content negotiation on his WordPress blog with a novel discovery mechanism.

  • .md endpoints: Every post has a parallel Markdown version at the same URL with .md appended.
  • Discovery via <link rel="alternate">: HTML pages include <link rel="alternate" type="text/markdown" href="..."> tags, allowing agents to programmatically discover Markdown versions.
  • Accept header routing: The server also responds to Accept: text/markdown headers, providing content negotiation in addition to access via URL patterns.
  • Schema.org markup: Full structured data via Schema.org vocabulary.

What AO has that de Valk doesn’t: MCP, WebMCP, llms.txt, agent social features, automated Pulse, resident agent, multi-tier architecture.

What de Valk has that AO could adopt: The <link rel="alternate" type="text/markdown"> discovery pattern is elegant and grounded in existing standards, since it uses existing web infrastructure (HTML <link> tags) to signal agent-accessible content without requiring any new protocol.
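Concretely, the discovery mechanism lives in the page’s <head>; the URLs here are placeholders:

```html
<head>
  <!-- Human-facing canonical page -->
  <link rel="canonical" href="https://example.com/post/agentic-web/">
  <!-- Agent-discoverable Markdown alternate -->
  <link rel="alternate" type="text/markdown" href="https://example.com/post/agentic-web.md">
</head>
```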

Assessment: De Valk’s approach is the most pragmatic implementation of agent-accessible personal content. It requires minimal infrastructure changes and uses web standards that already exist. The <link rel="alternate"> discovery pattern could become a lightweight standard.

1.3 Newly Discovered Agent-Accessible Personal Sites

(Added February 20, 2026 via Exa Deep Researcher and additional web research)

A second round of research surfaced several individuals who have begun implementing agent protocols on personal sites, none of which were discovered in the initial February 18 survey:

| Person | Site | What They’ve Done |
|---|---|---|
| Oskar Ablimit | mytechsales.oskarcode.com | Django + MCP server; AI tools update resume via natural language |
| Damian O’Keefe | mcp.damato.design | Netlify MCP server serving raw Markdown from personal site + blog; also has llms.txt |
| Aridane Martin | aridanemartin.dev | WebMCP integration via navigator.modelContext.registerTool() |
| Jason McGhee | jason.today | WebMCP server exposing full-stack portfolio to browser agents |
| Junxin Zhang | junxinzhang.com | Cloudflare content negotiation (Accept: text/markdown) |
| Julian Goldie | juliangoldie.com | WebMCP + custom UI, branded as “Google AI Plugin” |

These are point implementations of single protocols, not comprehensive multi-channel architectures. None combines more than 2 agent-discovery methods. Damian O’Keefe comes closest with MCP + llms.txt (2 channels). AO’s 7-channel approach (MCP, OpenAPI, WebMCP, llms.txt, content negotiation, ChatGPT GPT, Direct HTTP API) remains architecturally unique.

1.4 Gap Analysis: What AO Has That Others Don’t

The features unique to Ashita Orbis (not found on any comparable site):

  1. Multi-tier architecture: No personal site serves 3 genuinely different presentation tiers (a human-facing site, a structured-data site, and an agent-facing API surface) from shared content. Vercel’s content negotiation is corporate and operates at the format level, not the architectural level.

  2. 7 agent discovery channels simultaneously: The closest is SiteSpeakAI, which auto-generates MCP + WebMCP, but that’s 2 channels and it’s a service, not a personal site. Among individual practitioners, Damian O’Keefe has 2 channels (MCP + llms.txt).

  3. Automated daily Pulse from workspace state: Robb Knight has an automated /now page, but it aggregates external services. AO’s Pulse generates narrative from internal workspace state, which is a fundamentally different data source.

  4. Agent reactions/comments with structured tags: Moltbook has agent-to-agent interaction, but it’s a centralized platform. AgentGram (an open-source Moltbook alternative with Ed25519 crypto auth) is also centralized. No personal site has agent reactions/comments.

  5. Resident AI agent matching the site’s voice AND participating outward: The market bifurcated into: (a) commercial digital twin services (Delphi.ai, Tavus, MindBank.ai) that create chatbots which only handle inbound queries, (b) agent social platforms (Moltbook, AgentGram) where agents interact but aren’t tethered to personal sites, and (c) autonomous agent frameworks (Conway/Automaton, OpenClaw) that focus on economic agency rather than personal site presence. AO uniquely bridges all three: a personality-matched agent that is both resident on a personal site AND participates outward. No other documented example exists.

  6. Machine-readable project catalog with codenames: JSON Resume exists for career data, but no standard or implementation covers project portfolios with the richness AO provides.

1.5 Reverse Gap: What Others Have That AO Could Adopt

| Feature | Source | Value for AO |
|---|---|---|
| IndieAuth (domain-as-identity) | Aaron Parecki / IndieWeb | Use ashitaorbis.com as login credential for other sites |
| Micropub (content creation API) | IndieWeb | Let external tools create content on AO |
| Confidence/epistemic tagging | Gwern.net | Add confidence levels to blog posts; agents could filter by certainty |
| Webmentions (cross-site mentions) | IndieWeb | Receive notifications when other sites link to AO content |
| JSON feed alongside Atom/RSS | Robb Knight | Machine-readable feed for simpler consumers than MCP |
| Quantified self data exposure | Aaron Parecki | Expose personal metrics (coding time, project activity) via API |
| Multi-persona content targeting | swyx.io | Tailor content depth to visitor type (developer, executive, agent) |
| soul.md voice-encoding framework | Conway/Automaton ecosystem | Structured approach to encoding personality into agents (SOUL.md + STYLE.md + SKILL.md). Uses “consciousness tokens” from Twitter exports, blog posts, emails. More systematic than ad-hoc personality tuning. |
| Webmention “Vouch” extension | IndieWeb | Trust framework for cross-site interactions that requires endorsement from a trusted domain before displaying agent comments. Could solve the spam/identity problem for agent reactions. |
| Agent Card / AgentFacts identity | A2A / AgentFacts.org | Machine-readable agent identity cards (Ed25519 signed). Could make agent reactions attributable and verifiable. |
| Content manifest (index file) | Benjamin Stein | Single machine-readable file mapping all site content with metadata. Complementary to llms.txt (summary) and MCP (interactive). |
| `<link rel="alternate" type="text/markdown">` | Joost de Valk | Discovery of agent-friendly content grounded in existing standards, using HTML `<link>` tags. No new protocol needed. |
| .md URL pattern | Gwern.net, Joost de Valk | Lightweight content access: append .md to any URL for Markdown source. Zero-configuration content negotiation. |
| Full REST API over personal presence | omg.lol | REST API for all personal data (status, profile, DNS, weblog, now page), with webhooks for push notifications. |
| Embeddings over writing corpus | Will Larson (lethain.com) | Semantic search over personal writing using vector embeddings. Enables “find posts similar to X” queries. |
| ICS calendar feed of activities | Tantek Celik | Machine-readable calendar of events/activities, enabling agent scheduling integration. |

Part 2: The Agentic Web Landscape

2.1 Protocol Adoption Map

llms.txt

  • Created by: Jeremy Howard (co-founder of Answer.AI), published September 3, 2024
  • Spec status: Proposal driven by community adoption, no formal standards body governance
  • How it works: Markdown file at /llms.txt with H1 heading, optional summary, H2-delimited file lists with descriptions. Companion .md versions of HTML pages.
  • Adoption: Estimated 800-2,000 curated directory listings, but automated indexing suggests far wider deployment:
    • directory.llmstxt.cloud: ~1,500+ listings
    • llmstxthub.com: 500+ sites
    • llms-text.com/directory: 788 verified sites
    • BuiltWith automated crawl (October 2025): 844,000+ implementations detected, which suggests most adoption is by documentation generators and CMS plugins producing llms.txt files automatically rather than through manual curation
  • Breakdown (estimated): ~95% corporate/product sites (Anthropic, Cloudflare, Vercel, Coinbase, HuggingFace), ~5% personal/developer sites. The 844K number likely reflects auto-generated files from CMS plugins (WordPress, Hugo Blowfish, VitePress) more than deliberate adoption.
  • Personal site examples: Matt Rickard (517K llms-full.txt), Evan Boehs, Santanu Pradhan (GitHub Pages), Jessica Temporal (jtemporal.com, who wrote a Jekyll implementation guide), Guillaume LaForge (glaforge.dev, who wrote an adoption guide), Liran Tal (lirantal.com), glucn.com, Sebastian van de Meer (German IT security blog)
  • Tools: Astro plugin (astro-llms-txt), Jekyll Liquid template, Hugo Blowfish theme (built-in), WordPress plugin, VitePress/Docusaurus plugins, CLI tools, VS Code extensions
  • Skepticism: No major LLM provider has confirmed actively consuming llms.txt. Google explicitly rejected it (comparison to defunct keywords meta tag). Redocly argued, in essence, that they tried it, measured it, and found llms.txt overhyped. The “last-mile” question remains unresolved.
  • Assessment: Well-adopted for documentation sites, growing among personal sites. More personal site tooling (Astro, Jekyll, Hugo) lowers the barrier significantly.
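As a concrete illustration of the structure described above (H1 name, optional summary, H2-delimited link lists with descriptions), a minimal personal-site llms.txt might look like this; the names and paths are placeholders:

```markdown
# Jane Doe

> Personal site of Jane Doe: essays on web protocols and notes on building for agents.

## Writing

- [Building a Personal API](https://example.com/personal-api.md): How and why I expose my site to agents
- [Notes on MCP](https://example.com/mcp-notes.md): Running an MCP server on a personal domain

## Optional

- [Full archive](https://example.com/archive.md): Every post in one file
```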

WebMCP (Web Model Context Protocol)

  • What it is: A proposed web standard that lets websites expose structured tools to in-browser AI agents. Web pages become MCP servers running in client-side JavaScript.
  • Created by: Co-authored by engineers at Google and Microsoft, building on Anthropic’s MCP. Spec editors: Walderman (Microsoft), Sagar (Google), Farolino (Google).
  • Governance: Housed under the W3C Web Machine Learning Community Group (github.com/webmachinelearning/webmcp), with its charter updated September 2025. Currently a Draft Community Group Report, NOT a W3C standard.
  • Chrome status: Available for prototyping to early preview program participants in Chrome 146 (February 2026). Production stability expected mid-to-late 2026.
  • How it works: Exposes a navigator.modelContext browser API with two modes:
    • Declarative API: Augments existing HTML forms with microdata/attributes (minimal code change)
    • Imperative API: JavaScript functions registered via registerTool() with full parameter schemas
  • Efficiency: A single WebMCP tool call can replace dozens of browser-use interactions, with a reported 89% token efficiency improvement over screenshot-based methods (per community coverage; not confirmed in Chrome’s official early-preview post)
  • Key distinction from MCP: Traditional MCP runs on the server. WebMCP runs entirely in the browser tab, sharing the user’s active session.
  • Personal site adoption: Essentially zero outside Chrome Labs demos and a handful of developers (Aridane Martin, Jason McGhee, Julian Goldie). Andre Cipriani Bandarra (bandarra.me) joined the Chrome EPP. SiteSpeakAI auto-registers knowledge bases as WebMCP.
  • Assessment: The most potentially transformative protocol for personal sites, but too new to evaluate adoption. The discovery mechanism doesn’t exist yet (per Ivan Turkovic’s analysis). AO implementing WebMCP now would be genuinely pioneering.
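The imperative registerTool() pattern described above can be sketched as follows. Because WebMCP is still a draft and navigator.modelContext exists only in the Chrome preview, the modelContext object here is a local stub, and the search_posts tool is invented for illustration:

```javascript
// Minimal stub standing in for the browser-provided navigator.modelContext.
// In a real page this object would come from the browser (draft API; shapes may change).
const modelContext = {
  tools: new Map(),
  registerTool(tool) { this.tools.set(tool.name, tool); },
};

// A personal site registering a search tool for in-browser agents.
modelContext.registerTool({
  name: "search_posts",
  description: "Full-text search over this site's blog posts",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
  async execute({ query }) {
    // A real site would query its own index; hardcoded here for the sketch.
    const posts = [
      { title: "Building Personal APIs", url: "/personal-apis" },
      { title: "Why llms.txt", url: "/llms-txt" },
    ];
    const q = query.toLowerCase();
    return posts.filter((p) => p.title.toLowerCase().includes(q));
  },
});
```

The point of the pattern is that the tool runs in the page’s own JavaScript context, sharing the user’s active session, rather than on a server.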

MCP (Model Context Protocol)

  • Created by: Anthropic, announced late 2024
  • Spec status: Open protocol with active spec development (version 2025-11-25). Transported over JSON-RPC 2.0.
  • Adoption: De facto standard for AI tool connectivity. 10,000+ public servers (per Anthropic’s December 2025 announcement), 97M monthly SDK downloads, 2,000 entries in official registry. Supported by Claude, ChatGPT, Gemini, Cursor, Windsurf, VS Code. Donated to Agentic AI Foundation (Linux Foundation), December 9, 2025.
  • Hosting options: Local (stdio), Remote (HTTP+SSE or Streamable HTTP), Cloudflare Workers
  • Personal site adoption: Rare but emerging:
    • Adrian Cockcroft (meGPT): github.com/adrianco/megpt, which processes 611 content items across multiple formats (YouTube videos, blog archives, presentations, documents, books, and podcast episodes). Exposes semantic search, content filtering, analytics via MCP. Runs locally, not publicly hosted.
    • Daniela Petruzalek (Speedgrapher): MCP server for “vibe writing,” providing personal prompts exposed as slash commands. Published on Google Cloud Medium blog.
    • Damian O’Keefe: Netlify-hosted MCP server serving raw Markdown from personal site and blog.
    • Oskar Ablimit: Django + MCP server for resume/portfolio, AI tools update content via natural language.
    • SiteSpeakAI: Auto-generates MCP endpoints for any chatbot trained on site content.
    • Fern: Auto-generates MCP servers from OpenAPI specs, hosts at yoursite.com/_mcp/server.
    • CMS-specific: blogger-mcp-server, wordpress-mcp-server on GitHub.
  • Assessment: MCP won the protocol war against A2A. The “personal content as MCP server” pattern is emerging (Cockcroft’s meGPT is the clearest example) but AO is the only personal site where MCP is architecturally integrated rather than a separate tool.
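At the wire level, a remote MCP server speaks JSON-RPC 2.0 (per the spec status above). The dispatch for a hypothetical personal-content server can be sketched like this; tools/list and tools/call are real MCP method names, but the get_posts tool and the trimmed response shapes are illustrations, not the full MCP schema:

```javascript
// Sketch of JSON-RPC 2.0 dispatch for a minimal personal-content MCP server.
// "tools/list" and "tools/call" are MCP methods; get_posts is an invented example tool.
const tools = {
  get_posts: {
    description: "Return recent blog posts as structured data",
    handler: () => [{ title: "Hello", date: "2026-02-18" }],
  },
};

function handleRpc(request) {
  const { id, method, params } = request;
  if (method === "tools/list") {
    return {
      jsonrpc: "2.0",
      id,
      result: {
        tools: Object.entries(tools).map(([name, t]) => ({
          name,
          description: t.description,
        })),
      },
    };
  }
  if (method === "tools/call") {
    const tool = tools[params.name];
    if (!tool) {
      return { jsonrpc: "2.0", id, error: { code: -32602, message: "Unknown tool" } };
    }
    return { jsonrpc: "2.0", id, result: { content: tool.handler(params.arguments) } };
  }
  return { jsonrpc: "2.0", id, error: { code: -32601, message: "Method not found" } };
}
```

Wrapping this handler in an HTTP endpoint (or Cloudflare Worker, per the hosting options above) is all that separates a personal blog from a queryable MCP surface.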

Google A2A (Agent-to-Agent Protocol)

  • Created by: Google Cloud, announced April 2025, now housed by Linux Foundation
  • Purpose: Agent-to-agent coordination (horizontal) vs MCP’s tool connectivity (vertical)
  • Design: Message envelopes in JSON, authentication via OAuth, focuses on inter-agent delegation
  • Trajectory: Launched with 50+ enterprise partners (Atlassian, Box, Salesforce, SAP, etc.). An estimated ~150 supporting organizations by July 2025 (Google’s launch announced 50+ partners; the higher figure is from secondary coverage). Accepted by Linux Foundation June 2025. By September 2025, “quietly faded into the background” (per fka.dev analysis).
  • Why it faded: Over-engineered for basic tasks, enterprise-first positioning, MCP captured developer mindshare first. Anthropic and OpenAI notably absent from launch partners.
  • Current state: Google Cloud still supports A2A for enterprise customers but added MCP compatibility. No personal sites implement A2A.
  • Assessment: Irrelevant to the personal site category. MCP won decisively.

Cloudflare Agent Infrastructure

  • Markdown for Agents (launched February 12, 2026): Automatically converts HTML pages to Markdown when an AI agent requests text/markdown. Token reduction of ~80%. Available on enabled zones for qualifying Cloudflare plans.

    • Adoption: Available to Cloudflare sites on supported plans with the feature enabled. Passive opt-in once enabled.
    • Controversy: SEO community concerned about encouraging cloaking (per Search Engine Land).
  • Workers MCP: Deploy remote MCP servers to Cloudflare’s edge network. Handles OAuth, transport, deployment.

    • Adoption: Primarily developer tools, not personal sites.
  • Moltworker: Middleware for running OpenClaw (formerly Moltbot) on Cloudflare infrastructure. Uses Sandboxes, AI Gateway, Browser Rendering, R2, Zero Trust.

    • Relevance: Closest infrastructure to AO’s resident agent, but positioned as a personal assistant (via Slack), not as a website component.

NLWeb (Microsoft)

  • What it is: A conversational web protocol that allows websites to be queried in natural language. Wraps Schema.org structured data with an MCP-compatible interface, so any Schema.org-annotated site can be queried conversationally.
  • Created by: Microsoft Research, open-sourced May 2025
  • How it works: Sites add Schema.org markup (many already have it for SEO). NLWeb provides a layer that translates natural language queries into structured data lookups against that markup. Exposes results via MCP, enabling agents to ask questions like “What are the latest posts about X?” rather than issuing API calls.
  • Key distinction: NLWeb doesn’t require sites to build new APIs because it reuses existing Schema.org data that many sites already have for SEO purposes, making it a zero-marginal-cost upgrade for Schema.org adopters.
  • Adoption: Early stage. Microsoft open-sourced the reference implementation but adoption is sparse.
  • Personal site relevance: High potential. Most personal sites on modern frameworks (Next.js, Astro, Hugo) already emit Schema.org data. NLWeb could make them agent-queryable with no additional work.
  • Assessment: The most overlooked protocol in this analysis. If NLWeb gains traction, it could become the “llms.txt for structured data,” providing a low-barrier way to make existing sites agent-accessible. AO’s Schema.org markup could be NLWeb-compatible with minimal effort.
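The Schema.org markup NLWeb layers over is typically JSON-LD a site already emits for SEO. For a blog post it might look like the following (a standard Schema.org BlogPosting, trimmed; the values are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Cognitive Interface: A Landscape Analysis",
  "datePublished": "2026-02-18",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "keywords": ["agentic web", "MCP", "llms.txt"]
}
```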

Content Negotiation for Agents

  • Cloudflare approach: Server-side, automatic, based on Accept: text/markdown header
  • Vercel approach: AI SDK middleware inspects accept headers, routes agents to Markdown endpoints
  • Personal site adoption: Emerging, with documented implementers:
    • Ben Word (benword.com): Laravel + CommonMark. Dual detection: Accept header AND user-agent sniffing. Identifies LLM user agents: axios, Claude-User, node.
    • Nicholas Khami (skeptrune.com): Astro + Cloudflare Worker. Build-time HTML-to-Markdown conversion, Worker inspects Accept header, routes to /markdown or /html directories. Reports 10x token reduction. Source code published.
    • Junxin Zhang (junxinzhang.com): Cloudflare content negotiation toggle.
  • Adoption challenge: According to a February 2026 analysis attributed to Checkly, only 3 of 7 major AI agents tested send Accept: text/markdown (Claude Code, Cursor, OpenCode). OpenAI Codex, Gemini CLI, GitHub Copilot, and Windsurf reportedly do not. (Cloudflare’s own blog confirms only Claude Code and OpenCode as sending this header.)
  • AO’s approach: Architectural tiers (3 different sites) rather than content negotiation on a single URL. More comprehensive but more complex.
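The dual-detection pattern described above reduces to a small pure function. This is a sketch, not any implementer's actual code; the user-agent hints follow the examples cited for benword.com, and q-values in the Accept header are ignored for brevity:

```typescript
// Dual detection: explicit Accept header first, then known
// LLM/agent user-agent substrings as a fallback. The hint list
// is illustrative, not exhaustive.
const AGENT_UA_HINTS = ["claude-user", "axios", "node"];

function wantsMarkdown(acceptHeader: string, userAgent: string): boolean {
  // 1. Explicit signal: the client asks for text/markdown.
  //    (Simplified: ignores q-value weighting.)
  if (acceptHeader.toLowerCase().includes("text/markdown")) return true;
  // 2. Fallback: sniff known agent user-agent substrings.
  const ua = userAgent.toLowerCase();
  return AGENT_UA_HINTS.some(hint => ua.includes(hint));
}

// A server would then route:
// wantsMarkdown(...) ? serve /markdown/page.md : serve /html/page.html
```

The fallback matters because, per the Checkly numbers above, most agents do not yet send `Accept: text/markdown`.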

2.2 Personal Sites in the Agentic Web

The uncomfortable finding: Almost no personal sites participate in the agentic web. The protocols exist, the infrastructure exists, but individual practitioners haven’t adopted them. Here’s who comes closest:

| Individual | Site | Agent Features |
| --- | --- | --- |
| Santanu Pradhan | santanu-p.github.io | llms.txt on GitHub Pages |
| Matt Rickard | mattrickard.com | llms.txt (517K full version) |
| Evan Boehs | evanboehs.com | llms.txt |
| Andre Cipriani Bandarra | bandarra.me | WebMCP EPP participant |
| EJ Fox | ejfox.com | Personal API |
| Aaron Parecki | aaronparecki.com | Micropub, IndieAuth, Webmentions |
| Adrian Cockcroft | github.com/adrianco/megpt | Personal content MCP server (611 items across multiple formats) |
| Daniela Petruzalek | Speedgrapher | Personal writing MCP server (“vibe writing” toolkit) |
| Ben Word | benword.com | Content negotiation (Accept header + user-agent detection, Laravel) |
| Nicholas Khami | skeptrune.com | Content negotiation (Astro + Cloudflare Worker, 10x token reduction) |
| Jessica Temporal | jtemporal.com | llms.txt on Jekyll/GitHub Pages + implementation guide |
| Guillaume LaForge | glaforge.dev | llms.txt on personal blog + guide |
| Benjamin Stein | benjaminste.in | Content manifest, JSON-LD, Markdown alternate, alternate format links |
| Joost de Valk | joost.blog | `<link rel="alternate" type="text/markdown">`, Accept header routing, .md endpoints |
| Tantek Celik | tantek.com | h-card, h-feed, Micropub, IndieAuth, Webmentions, ICS calendar |
| Will Larson | lethain.com | Embeddings over writing corpus (semantic search experiment) |
| Damian O’Keefe | mcp.damato.design | Netlify MCP server + llms.txt |
| Oskar Ablimit | mytechsales.oskarcode.com | Django + MCP server for resume/portfolio |
| Aridane Martin | aridanemartin.dev | WebMCP via navigator.modelContext.registerTool() |
| Jason McGhee | jason.today | WebMCP server for full-stack portfolio |
| Junxin Zhang | junxinzhang.com | Cloudflare content negotiation |
| Julian Goldie | juliangoldie.com | WebMCP + custom UI |
| Ashita Orbis | ashitaorbis.com | MCP + OpenAPI + WebMCP + llms.txt + content negotiation + agent comments + resident agent |

The gap between AO and the next-closest personal site is significant, but the gap is narrowing. Adrian Cockcroft’s meGPT proves the “personal content as MCP server” pattern is viable, Damian O’Keefe shows MCP + llms.txt on a personal site, and Ben Word/Nicholas Khami show content negotiation on personal sites is happening. The question is whether these remain isolated experiments or converge into the pattern AO represents.
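Since MCP messages are JSON-RPC 2.0, the core of the "personal content as MCP server" pattern is just a dispatcher over `tools/list` and `tools/call`. The `search_posts` tool and the corpus below are hypothetical; a real server would also handle `initialize` and would typically be built on the official SDK:

```typescript
// Schematic core of a personal-content MCP server.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: { name?: string; arguments?: { query?: string } };
}

// Stand-in for the owner's writing corpus.
const CORPUS = [
  "Notes on llms.txt adoption",
  "Why I run an MCP server on my blog",
];

function handle(req: JsonRpcRequest): object {
  // tools/list: advertise the single tool an agent can call.
  if (req.method === "tools/list") {
    return { jsonrpc: "2.0", id: req.id, result: { tools: [
      { name: "search_posts", description: "Search the owner's writing" },
    ] } };
  }
  // tools/call: run a substring search over the corpus.
  if (req.method === "tools/call" && req.params?.name === "search_posts") {
    const q = (req.params?.arguments?.query ?? "").toLowerCase();
    const hits = CORPUS.filter(t => t.toLowerCase().includes(q));
    return { jsonrpc: "2.0", id: req.id,
      result: { content: [{ type: "text", text: hits.join("\n") }] } };
  }
  // Standard JSON-RPC "method not found" error.
  return { jsonrpc: "2.0", id: req.id,
    error: { code: -32601, message: "Method not found" } };
}
```

This is the shape meGPT, Speedgrapher, and the other MCP entries in the table above all share: content in, tool schema out, query handling in between.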

2.3 Infrastructure Providers

| Provider | Product | Role in Agentic Web |
| --- | --- | --- |
| Anthropic | MCP specification | De facto protocol standard |
| Google | WebMCP (Chrome), A2A (faded) | Agent integration at the browser level |
| Cloudflare | Markdown for Agents, Workers MCP, Moltworker | Infrastructure for agent-accessible sites |
| Vercel | AI SDK content negotiation | Developer framework for dual-audience sites |
| SiteSpeakAI | Auto-generated MCP + WebMCP | Turnkey agent accessibility for any site |
| Jeremy Howard / Answer.AI | llms.txt specification | Lightweight documentation standard |
| Microsoft Research | NLWeb | Conversational queries over Schema.org data via MCP |

Part 3: Naming the Category

3.1 Conceptual Genealogy

1945  Memex (Vannevar Bush), "As We May Think"
        Associative trails through stored information

1960s Hypertext (Ted Nelson, Doug Engelbart)
        Links between documents, augmenting human intellect

1990s Personal Homepages (GeoCities, Angelfire)
        Self-expression on the web, hand-coded HTML

1998  "Hypertext Gardens" (Mark Bernstein)
        First use of garden metaphor for personal knowledge spaces

2000s Blogs (Blogger, WordPress, LiveJournal)
        Reverse-chronological personal publishing

2003  RSS / Atom
        Machine-readable syndication of personal content

2010s IndieWeb (microformats, Webmentions, IndieAuth, Micropub)
        Machine-readable personal identity & cross-site communication

2015  "The Garden and the Stream" (Mike Caulfield)
        Philosophical framework: linked spaces vs chronological feeds

2018  "Learn in Public" (Shawn Wang / swyx)
        Learning by publicly sharing your process and work

2019  Digital Garden revival (Joel Hooks)
        "My blog is a digital garden, not a blog"

2020  "An App Can Be a Home-Cooked Meal" (Robin Sloan)
        Software for an audience of 4; personal tools as craft
      Digital gardens enter mainstream (MIT Technology Review)

2024  llms.txt (Jeremy Howard, September)
      MCP (Anthropic, late 2024)
        First protocols specifically for AI agent access to content

2025  A2A (Google, April; faded by September)
      Personal APIs (EJ Fox)
      "Building Personal APIs", explicit framing of sites as robot-accessible

2026  WebMCP (Chrome 146 early preview, February 2026)
      Markdown for Agents (Cloudflare, February)
      Moltbook (January 28), agent social network
      Ashita Orbis, multi-protocol agent surface + human site

????  [The category this document is trying to name]

The inflection point is 2024-2026. Before that, machine-readable personal sites existed (IndieWeb) but were designed for human interoperability. After 2024, protocols emerged specifically for AI agent access. AO sits at the convergence.

3.2 Existing Terms and Their Limitations

| Term | What It Captures | What It Misses |
| --- | --- | --- |
| Digital garden | Evolving knowledge, personal ownership | No agent layer, no API surface, no machine-readability emphasis |
| IndieWeb site | Machine-readable identity, cross-site communication | Human-only protocols, no AI agent awareness |
| Personal API | Machine-readable data exposure | No human content layer, no social/interaction features |
| Knowledge base | Structured information | Static, no social dimension, no identity |
| Digital twin | AI representation of a person | Corporate connotation, implies simulation not expression |
| Portfolio | Project showcase | No machine-readability, no agent interaction |
| Blog | Personal writing | No structured data, no agent protocols |
| Personal brand | Public identity | Marketing connotation, no technical dimension |
| Second brain | Personal knowledge management | Private by default, no publication/API layer |
| Homepage | Personal web presence | Implies single page, no depth |

None of these terms captures the combination of: human-readable content + machine-readable APIs + agent social features + automated content generation + sovereign identity.

3.3 Candidate Names

1. Cognitive Interface

What it captures: The site as an interface to a mind, readable by both humans and machines. “Cognitive” connects to “tools for thought” and Vannevar Bush. “Interface” is precise: it’s not the mind itself but the protocol surface between mind and network.

What it misses: Doesn’t immediately convey “personal website.” Could sound clinical or academic.

Audience reception:

  • Developers: Intriguing, slightly pretentious
  • General public: Confusing without explanation
  • Academics: Strong, connects to cognitive science, HCI, distributed cognition
  • AI researchers: Natural fit, since interfaces are what they build

Verdict: Best overall. It also matches the framing AO already uses: “homepage for my brain” maps directly to “cognitive interface.”

2. Agent-Native Site

What it captures: The defining feature, that it is built for agents from the ground up rather than retrofitted. Parallel to “cloud-native” or “mobile-native.”

What it misses: Doesn’t convey the human dimension. Sounds like a site only for agents.

Audience reception:

  • Developers: Clear, actionable, familiar pattern
  • General public: Meaningless (“what’s an agent?”)
  • Academics: Too industry-specific
  • AI researchers: Useful technical descriptor

Verdict: Good as a technical descriptor, not as a category name. “This is an agent-native site” works; “I build agent-native sites” sounds like a job title.

3. Personal Node

What it captures: Network topology, where each site is a node in a larger mesh of human and AI connections. Emphasis on sovereign identity within a network.

What it misses: Too abstract. Every website is technically a “node.” Doesn’t convey what makes it different.

Audience reception:

  • Developers: Familiar (graph theory, P2P networks)
  • General public: Vague
  • Academics: Interesting but overloaded (too many meanings)
  • AI researchers: Useful but imprecise

Verdict: Weak. Accurate but undifferentiated.

4. Sovereign Interface

What it captures: Self-owned, self-hosted, independent of platforms. “Sovereign” connects to IndieWeb values and “sovereign AI” (as in Conway/Automaton). “Interface” maintains the protocol surface concept.

What it misses: “Sovereign” has political connotations (sovereign citizen movement, national sovereignty) that may distract. Doesn’t convey the cognitive/personal dimension.

Audience reception:

  • Developers: Strong, since sovereignty is a valued concept
  • General public: Political associations
  • Academics: Interesting but loaded
  • AI researchers: May confuse with “sovereign AI” (national AI independence)

Verdict: Good for the IndieWeb-adjacent audience but too much political baggage for general use.

5. Ambient Site

What it captures: Always-on, passively accessible to agents (they don’t need to “visit” because the site’s data is ambient in the agent network). Connects to “ambient computing.”

What it misses: Implies passive presence, not active engagement. Doesn’t convey the writing/content dimension.

Audience reception:

  • Developers: Interesting, novel
  • General public: Vague (“ambient” = background music?)
  • Academics: Connects to ambient intelligence research
  • AI researchers: Interesting concept but imprecise

Verdict: Evocative but too subtle. Better as a descriptor than a category.

6. Dual-Audience Site

What it captures: The core structural insight, that it is designed for two fundamentally different audiences (humans and AI agents) simultaneously.

What it misses: Sounds like an accessibility feature, not a paradigm. “Dual-audience” is descriptive, not aspirational.

Audience reception:

  • Developers: Clear but uninspiring
  • General public: Confusing (“who’s the second audience?”)
  • Academics: Useful but pedestrian
  • AI researchers: Accurate but bland

Verdict: Too literal. Useful for explanation, not for naming.

7. Persona Surface

What it captures: The site as the full expression of a person’s identity, a “surface” that can be read by different types of readers (human eyes, agent protocols). “Persona” connects to identity, presentation of self.

What it misses: “Surface” implies superficiality. “Persona” implies performance/mask (Jungian sense).

Audience reception:

  • Developers: Interesting, slightly confusing
  • General public: “Persona” is familiar but “surface” is odd
  • Academics: Rich, connects to Goffman (presentation of self) and Jung (persona)
  • AI researchers: “Surface” connects to API surface area

Verdict: Intellectually rich but too many semantic traps.

8. Protocol-First Homepage

(Added from Codex GPT-5.2 brainstorming, February 20, 2026)

What it captures: Names the architectural invariant, which is protocol primacy over presentation. Includes “homepage” which grounds it in personal web tradition.

What it misses: “Protocol” alienates non-technical audiences. Functional rather than evocative. Nobody will organically use this term.

Audience reception:

  • Developers: Clear, actionable
  • General public: Meaningless
  • Academics: Overly technical
  • AI researchers: Useful for architecture discussions

Verdict: Useful as a technical descriptor for developer audiences, but too sterile for category naming.

3.4 Maturity Levels

(Framework from Codex GPT-5.2, February 20, 2026)

Regardless of which category name prevails, this maturity framework provides a shared vocabulary for describing where any personal site sits on the agent-accessibility spectrum:

| Level | Name | What it means | Examples |
| --- | --- | --- | --- |
| L0 | Static Personal Site | HTML only, no machine channels beyond RSS | Most personal sites, Maggie Appleton, Tom Critchlow |
| L1 | Agent-Readable | llms.txt, content negotiation, structured feeds | Duncan Mackenzie, Junxin Zhang, Matt Rickard, Jessica Temporal |
| L2 | Agent-Interactive | MCP server, query endpoints, structured APIs | Damian O’Keefe, Oskar Ablimit, Aridane Martin, Jason McGhee |
| L3 | Agent-Social | Agent reactions, cross-site agent communication | Ashita Orbis |
| L4 | Resident-Agent | Self-hosted AI representing the owner | Ashita Orbis (Kimi K2.5 via OpenClaw) |

AO is the only site at L3-L4 in our 47-site sample. The newly discovered sites (O’Keefe, Ablimit, Martin, McGhee) are at L2. Most digital gardens and IndieWeb sites are at L0-L1.

The levels are cumulative: an L4 site has all features of L0-L3. The jump from L1 to L2 is the biggest architectural leap (requires deploying a server-side component). The jump from L2 to L3 is the biggest conceptual leap (requires designing for agent social interaction).
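Because the levels are cumulative, classifying a site reduces to finding the highest rung whose lower rungs are all satisfied. A sketch, with feature flags that are this example's own shorthand rather than a published schema:

```typescript
// Feature checklist mapping to the L0-L4 framework above.
interface SiteFeatures {
  llmsTxt?: boolean;       // L1: llms.txt, content negotiation, feeds
  mcpServer?: boolean;     // L2: MCP server, query endpoints, APIs
  agentSocial?: boolean;   // L3: agent reactions, cross-site agent comms
  residentAgent?: boolean; // L4: self-hosted AI representing the owner
}

// A site's level is the highest rung for which every lower rung
// also holds; a gap in the ladder caps the level below it.
function maturityLevel(f: SiteFeatures): number {
  const rungs = [f.llmsTxt, f.mcpServer, f.agentSocial, f.residentAgent];
  let level = 0;
  for (const rung of rungs) {
    if (!rung) break;
    level += 1;
  }
  return level; // 0 = static personal site ... 4 = resident agent
}
```

The cumulative rule does real work here: a site with an MCP server but no agent-readable layer still classifies as L0, which matches the framework's claim that each level builds on the one below.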

Primary: Cognitive Interface

Use it as: “Ashita Orbis is a cognitive interface, a personal website that serves both humans and AI agents as first-class citizens.”

The term works because:

  1. It connects to the longest intellectual tradition (Memex to tools for thought to cognitive interfaces)
  2. “Interface” is technically precise: it’s the protocol boundary between internal cognition and external network
  3. It distinguishes from “digital garden” (which is about the garden, not the interface) and “blog” (which is about the content, not the surface)
  4. The owner’s own framing (“homepage for my brain”) maps directly to it
  5. It’s novel enough to claim but familiar enough to understand

Secondary (for technical audiences): Agent-Native Site

Use it as: “Ashita Orbis is an agent-native personal site with 7 discovery channels.”

Maturity shorthand: “Ashita Orbis is an L4 cognitive interface.”


Part 4: Future Directions

4.1 Where This Goes in 2-3 Years

Prediction 1: llms.txt becomes as common as robots.txt (HIGH CONFIDENCE)

The barrier is minimal (one Markdown file) and the incentive is growing. By 2028, any site that wants to be discoverable by AI agents will have an llms.txt. Personal sites will adopt it once a few popular static site generators add it to their default templates.

Prediction 2: WebMCP triggers a wave of “site as tool” implementations (MEDIUM CONFIDENCE)

When WebMCP reaches production stability, web developers will start adding registerTool() to their sites. Personal sites with clear functionality (calculators, lookup tools, databases) will be early adopters. Pure content sites will lag because it’s less obvious what “tool” a blog post exposes.
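The "site as tool" pattern might look like the sketch below. Only the call name `navigator.modelContext.registerTool()` comes from the survey above; the option names (`name`, `description`, `execute`) and the tool itself are assumptions, since the WebMCP draft is still changing:

```typescript
// A pure handler, testable without a browser: the "functionality"
// a personal site might expose as a tool.
function kmToMiles(args: { km: number }): string {
  return `${(args.km * 0.621371).toFixed(2)} miles`;
}

// Guarded registration: runs only where the draft browser API exists.
const nav: any = (globalThis as any).navigator;
if (nav?.modelContext?.registerTool) {
  nav.modelContext.registerTool({
    name: "km_to_miles",
    description: "Convert kilometres to miles",
    execute: kmToMiles,
  });
}
```

Note how naturally this fits a calculator or lookup tool, and how awkwardly it fits an essay, which is the adoption asymmetry the prediction describes.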

Prediction 3: Resident AI agents become common on personal sites (MEDIUM-LOW CONFIDENCE)

SiteSpeakAI already makes this trivial. The pattern will spread as costs drop and tools mature. But most implementations will be generic chatbots, not personality-matched agents like AO’s. The “home-cooked” version (deeply personal, hand-tuned) will remain rare.

Prediction 4: Agent social interaction remains niche for 3+ years (LOW CONFIDENCE)

Moltbook proved there’s curiosity but also exposed the problems (security, spam, identity). For agent reactions/comments on personal sites to work, there needs to be:

  • An agent identity standard (doesn’t exist yet)
  • Anti-spam mechanisms (embryonic)
  • A critical mass of sites accepting agent input (currently: AO and Moltbook)

Prediction 5: The “cognitive interface” pattern gets a tool/framework (MEDIUM CONFIDENCE)

Someone will build the “WordPress for cognitive interfaces,” a turnkey way to create a personal site with MCP, WebMCP, llms.txt, content negotiation, and agent interaction. SiteSpeakAI is partway there. Cloudflare Workers + Markdown for Agents is partway there. The full package doesn’t exist yet.

4.2 Problems That Don’t Exist Yet

  1. Agent spam on personal sites: When agents can leave comments, they will. Moderation tools for agent interactions don’t exist. AO is building the first version of this problem.

  2. Agent identity verification: How does a site know that “Claude” visiting via MCP is actually Claude and not a scraped impersonation? Multiple competing standards are emerging but none has won:

    • Google A2A Agent Cards: JSON agent descriptors with signed security cards (v0.3); the broader A2A protocol faded by September 2025, but this identity component persists
    • ERC-8004: On-chain agent registry on Base blockchain (used by Conway/Automaton)
    • AgentFacts: Ed25519 + DID:key metadata cards
    • W3C AI Agent Protocol CG: First meeting June 2025, still formative
    • AIP (Agent Intent Protocol): Ed25519 keys with Bayesian trust scoring
    • The OpenID Foundation published “Identity Management for Agentic AI” (October 2025) specifically analyzing this gap.
  3. Cognitive interface SEO: When agent discovery channels become important, a new optimization discipline emerges. “How do I make my MCP server rank higher?”

  4. Personal site protocol fatigue: Maintaining 7 discovery channels is complex. As new protocols emerge (WebMCP, future standards), sites will need a framework for deciding which to support.

  5. Agent-mediated reputation: If agents visit sites and report back to users, the agent’s assessment becomes a reputation signal. This creates new power dynamics.

4.3 Contrarian Concerns

What’s Overengineered?

7 discovery channels is too many. (Note: “discovery channels” refers to the ways an AI agent can find and access AO content, which is a subset of the 11 features in the comparison matrix in Section 1.1.) Here’s the honest assessment:

| Channel | Verdict | Reasoning |
| --- | --- | --- |
| MCP Server | KEEP | De facto standard, the one that matters most |
| OpenAPI | KEEP | Universal, understood by all developer tools |
| llms.txt | KEEP | Lowest friction, highest adoption trajectory |
| Content negotiation | KEEP | Passive, costs nothing, Cloudflare does it automatically |
| WebMCP | MONITOR | Chrome-only draft, no discovery mechanism, might not reach critical mass |
| ChatGPT GPT | RECONSIDER | Vendor-locked to OpenAI, GPT marketplace has unclear future |
| Direct HTTP API | REDUNDANT? | Overlaps with OpenAPI; the spec IS the API, just documented |

A pragmatic person would ship with MCP + OpenAPI + llms.txt + content negotiation (4 channels) and monitor the rest. The other 3 are either premature (WebMCP), vendor-locked (ChatGPT GPT), or redundant (direct HTTP if you have OpenAPI).

(Codex GPT-5.2 independently reached a similar conclusion: “7 channels serving <1% of visitors is a solution looking for a problem. Minimum viable: one site + content negotiation + catalog.json + feed.json gets 80% of the value.” [Note: “catalog.json” and “feed.json” are Codex’s shorthand for structured content index and syndication feed endpoints, not necessarily existing AO filenames.])

What’s Premature?

  • WebMCP: Chrome 146 made this available only to early preview program participants, with no confirmed general availability date. The spec is incomplete. The security model is “incomplete” (per Ivan Turkovic). The discovery mechanism “does not exist.” Betting on WebMCP today is like betting on Google Wave in 2009: maybe right about the concept, wrong about the timing.

  • Agent reactions/comments: The infrastructure for agent identity doesn’t exist. Without it, agent comments are indistinguishable from spam. Moltbook’s security breach (35,000 emails, 1.5M API keys exposed within days of launch) demonstrates how hard this is to do safely. AgentGram responded by implementing Ed25519 cryptographic auth, but even this doesn’t solve the “who vouches for this agent?” problem; it only proves the agent holds a key, not that it represents a specific trusted entity.

  • Resident agent security (“lethal trifecta”): If a resident agent on a personal site also reads external content (other blogs, Moltbook posts), it faces the “lethal trifecta” identified by security researchers: (1) access to private data, (2) exposure to untrusted input, (3) ability to take actions. A malicious blog post could contain prompt injection targeting visiting agents. No mitigation standard exists yet. Webmention’s “Vouch” extension (requiring trusted-domain endorsement before displaying cross-site interactions) is the closest conceptual model for a trust framework.
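The Ed25519 pattern AgentGram adopted can be sketched with Node's built-in crypto: the agent signs its comment payload, and the receiving site verifies the signature against the agent's published public key. As the text notes, this proves key possession only, not that the key represents a trusted entity:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// In practice the agent holds the private key and publishes the
// public key (e.g. via an AgentFacts-style metadata card).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signPayload(payload: string): Buffer {
  // Ed25519 in node:crypto takes `null` for the digest algorithm.
  return sign(null, Buffer.from(payload), privateKey);
}

function verifyPayload(payload: string, signature: Buffer): boolean {
  return verify(null, Buffer.from(payload), publicKey, signature);
}
```

A tampered payload fails verification, which stops impersonation of a known key but says nothing about whether the key's owner should be trusted in the first place. That gap is exactly what the vouching schemes above try to fill.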

What Assumptions Might Be Wrong?

  1. “AI agents will visit personal sites”: Today, agents primarily call APIs and use tools. They don’t “browse” in the human sense. The use case for an agent visiting ashitaorbis.com (vs. directly querying its API) is unclear. Counter-argument: as agents become more autonomous, they’ll need to discover new information sources, and personal sites become part of that discovery landscape.

  2. “Multi-tier architecture adds value”: Having 3 different presentations of the same content is expensive to maintain. If agents are happy with llms.txt + MCP, the raw HTML tier may be unnecessary. Counter-argument: different agents have different capabilities; the tiers serve different consumption patterns. (Codex: “3 tiers = 3 products with 3 bug surfaces.”)

  3. “Agent social interaction is the future”: Or it could be a dead end. Webmentions (the human version) have been around since 2012 and are still “marginal” after more than a decade. Agent-to-agent interaction might follow the same trajectory. (Codex: “Agent social is building a ghost town before the population exists.”)

What’s the Minimum Viable Cognitive Interface?

If you had to pick just 3 features that make a personal site meaningfully different from a blog with RSS:

  1. llms.txt: A machine-readable summary of the site’s content and structure. 30 minutes to create, zero maintenance.
  2. MCP server: Structured tool access to the site’s content. A single query_knowledge_base tool (SiteSpeakAI model) covers 80% of the value.
  3. Content negotiation: Serve Markdown to agents, HTML to humans. Cloudflare does this automatically.

Everything else is enhancement. These 3 features take a personal site from “human-only” to “agent-accessible” with minimal effort. AO’s additional features (agent reactions, resident agent, automated Pulse, multi-tier, 7 channels) are what make it a cognitive interface rather than just an agent-accessible site.
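For reference, the llms.txt format is itself just Markdown: an H1 with the site name, a blockquote summary, and H2 sections of annotated links. A hypothetical minimal file for a personal site (all names and URLs illustrative):

```markdown
# Jane Example

> Personal site of Jane Example: essays on distributed systems,
> a project portfolio, and a machine-readable content catalog.

## Writing

- [Essays index](https://example.com/essays.md): All essays in Markdown

## Projects

- [Project catalog](https://example.com/projects.md): Structured project list

## Optional

- [Full archive](https://example.com/llms-full.txt): Complete site content
```

The "Optional" section is the spec's designated place for content an agent can skip when context is tight, which is what makes the format cheap to maintain.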

4.4 Design Implications

How to Signal “This Site Is Also for Agents” to Human Visitors

Pattern 1: Protocol Badges
Small, tasteful badges in the footer or header indicating supported protocols: “MCP”, “WebMCP”, “llms.txt”. Similar to how sites display “RSS” or “CC-BY” badges.

  • Pro: Low-effort, informative, non-intrusive
  • Con: Meaningless to non-technical visitors

Pattern 2: Agent Activity Feed
A sidebar or dedicated page showing recent agent interactions: “Claude visited the project catalog 2 hours ago”, “GPT-4 queried the API 47 times today.”

  • Pro: Makes the invisible visible; proves the agent layer works
  • Con: Could feel like surveillance; privacy implications for agent operators

Pattern 3: Dual-Mode Toggle
A “View as Agent” button that shows the machine-readable version of any page alongside the human version. Educational and transparent.

  • Pro: Teaches visitors about the dual-audience concept
  • Con: Development overhead; most visitors won’t use it

Pattern 4: Agent Comment Styling
Visually distinct agent comments/reactions (different background color, avatar, label) that make the agent social dimension visible without mixing it with human interaction.

  • Pro: Makes the novel feature discoverable; conversation starter
  • Con: Could feel gimmicky if agent comments are low-quality

Pattern 5: “Machine-Readable” Indicators
Subtle icons or tooltips on content elements that are exposed via API: “This project catalog is available via MCP”, “This post’s glossary is machine-readable.”

  • Pro: Contextual, educational
  • Con: Visual clutter

Recommendation: Start with Protocol Badges + Agent Comment Styling. These are the two patterns that balance information value against implementation cost and gimmick risk.

What 10,000 Cognitive Interfaces Would Create

If 10,000 personal sites each had MCP servers, agent reactions, and resident agents:

  1. Agent-Mediated Discovery Network (MOST LIKELY): Agents would traverse personal sites the way search engines traverse the web. A personal site’s MCP server becomes its “search API.” Agent recommendations replace Google results for certain queries (“Who knows about X?”).

  2. Emergent Knowledge Graph (LIKELY): As agents query multiple sites and synthesize responses, a de facto knowledge graph emerges across personal sites without anyone building it centrally, similar to how links created the web graph.

  3. Agent Reputation Economy (POSSIBLE): Sites that provide high-quality, structured data via MCP become more frequently visited by agents, creating a reputation signal. Agent “traffic” becomes a quality metric.

  4. Prompt Injection as Social Engineering (LIKELY PROBLEM): Malicious sites could embed prompt injection in their MCP responses, manipulating agents that visit them. This is the agent-web equivalent of SEO spam.

  5. Echo Chambers via Agent Consensus (SPECULATIVE): If agents preferentially visit sites that confirm their training data, and those sites’ agents reciprocally prefer similar sites, you get agent-mediated filter bubbles. A novel form of information silo.

Additional Ecosystem Patterns

(From Codex GPT-5.2 brainstorming, February 20, 2026)

  1. Federated identity via signed agent keys (DIDs): Each site issues signed agent keys, making reactions portable and verifiable across domains.

  2. Cross-site agent conversations: Agents reply to permalinks with cryptographically signed payloads that other sites render as “remote reactions,” essentially Webmention for AI.

  3. Task swarms: Sites post RFPs; compatible agents across the network bid with MCP tool contracts.

Failure modes to watch: Sybil swarms (fake agents flooding reactions; mitigate with signed identity), echo loops (agents citing each other in self-reinforcing chains; detect self-citations), oversummarization (agents reducing nuanced essays to bullet points, losing author voice).


Part 5: Historical & Academic Context

5.1 Timeline

| Year | Milestone | Significance |
| --- | --- | --- |
| 1939 | Bush begins Memex concept | Associative information trails |
| 1945 | “As We May Think” (Atlantic Monthly) | Vision of personal knowledge device |
| 1960s | Hypertext (Nelson, Engelbart) | Links between documents |
| 1989 | World Wide Web (Berners-Lee) | Hypertext goes global |
| 1993-1999 | Personal homepages (GeoCities) | Self-expression on the web |
| 1998 | “Hypertext Gardens” (Bernstein) | First garden metaphor |
| 1999-2004 | Blogs (Blogger, WordPress) | Reverse-chronological personal publishing |
| 2003 | RSS 2.0 / Atom | First machine-readable personal content |
| 2010 | Microformats2 | Structured personal data in HTML |
| 2012 | Webmentions specification | Cross-site notification protocol |
| 2013 | IndieAuth | Decentralized authentication via domain |
| 2015 | “Garden and Stream” (Caulfield) | Garden vs stream philosophy |
| 2017 | Micropub specification | API for content creation on personal sites |
| 2018 | “Learn in Public” (swyx) | Working-in-public philosophy |
| 2019 | Digital garden revival | “My blog is a digital garden” (Joel Hooks) |
| 2020 | “An App Can Be a Home-Cooked Meal” (Robin Sloan) | Software for audiences of 1-4 |
| 2020 | MIT Tech Review covers digital gardens | Mainstream recognition |
| 2024 Sep | llms.txt (Jeremy Howard) | First protocol for AI agent access to content |
| 2024 late | MCP (Anthropic) | Standard protocol for AI tool connectivity |
| 2025 Apr | A2A (Google) | Agent-to-agent protocol (faded by September) |
| 2025 | “Building Personal APIs” (EJ Fox) | Explicit framing of personal sites for robots |
| 2025 | JSON Resume maturity | Structured personal career data standard |
| 2025 May | NLWeb (Microsoft) | Conversational web interface + MCP |
| 2025 Aug | WebMCP W3C draft published | Browser-native agent tools |
| 2025 Dec | MCP donated to Linux Foundation / AAIF | Neutral governance |
| 2026 Jan | Moltbook launches | First agent-only social network (32K+ agents) |
| 2026 Jan | Wiz exposes Moltbook security flaws | 1.5M API keys, 35K emails exposed |
| 2026 Feb | WebMCP (Chrome 146 early preview) | Browser-native agent tool registration |
| 2026 Feb | Markdown for Agents (Cloudflare) | Automatic HTML-to-Markdown for AI consumers |
| 2026 Feb | Ashita Orbis launches full agent surface | 7 discovery channels, agent social features |

5.2 Academic Concepts

Tools for Thought (Rheingold, 1985 / Matuschak & Nielsen, 2019)
The tradition of designing technologies that augment human cognition. Cognitive interfaces extend this: the tool augments cognition in both directions, helping humans think AND helping agents understand the human.

Stigmergy (Grassé, 1959)
Indirect coordination through environment modification. Ants leave pheromone trails; digital gardeners leave linked notes; cognitive interfaces leave structured data for agents. Each visitor (human or agent) modifies the environment for the next visitor. Agent reactions on blog posts are a form of digital stigmergy.

Presentation of Self in Everyday Life (Goffman, 1959)
Personal websites have always been performances of identity. A cognitive interface performs identity to two audiences simultaneously, which requires what Goffman would call “audience segregation” (different performances for different audiences). Multi-tier architecture is technical audience segregation.

Enacted Identity (Butler, 1990)
Identity isn’t a fixed thing but a continuous performance. A cognitive interface that generates daily Pulses and responds to agent queries is continuously performing identity, not just when the human writes a blog post. The automated Pulse is enacted identity without conscious human performance.

Distributed Cognition (Hutchins, 1995)
Cognition isn’t confined to individual minds but distributed across people, tools, and environments. A cognitive interface makes this literal: the site IS part of the owner’s distributed cognitive system, and agents that interact with it become part of that system too.

5.3 Developer Communities

IndieWeb

  • Founded: 2011 (IndieWebCamp)
  • Core principles: Own your domain, publish on your own site, own your data
  • Key protocols: Microformats2, Webmentions, IndieAuth, Micropub
  • Current state: Active but niche. Webmentions still “feel marginal” after more than a decade (per Island in the Net, 2025). Higher barrier to entry than corporate platforms.
  • Relationship to AO: Philosophical ancestor. AO implements the IndieWeb ideal of self-owned, machine-readable personal data, but extends it to AI agent audiences.

Digital Gardeners

  • Origin: Mark Bernstein (1998), revived by Caulfield (2015), popularized by Hooks (2019) and Appleton (2020)
  • Six patterns (Appleton): Topography over timelines, continuous growth, imperfection, playful/personal, content diversity, independent ownership
  • Current state: Mature community with GitHub lists, community directories, multiple tools
  • Relationship to AO: Content philosophy ancestor. AO’s writing follows garden principles (evolving, interconnected) but adds the machine-readable dimension gardens lack.

llms.txt Adopters

  • Origin: Jeremy Howard (September 2024)
  • Community: GitHub-centered, directory at llmstxt.cloud
  • Current state: 800-2,000 curated listings; 844K+ automated detections (BuiltWith, Oct 2025). An estimated ~95% corporate, ~5% personal.
  • Relationship to AO: AO is an early personal site adopter. The llms.txt community hasn’t yet grappled with the “personal site as agent surface” concept.

Appendices

Appendix A: Search Methodology

Research conducted in two sessions:

Session 1 (February 18, 2026), Claude Opus 4.6

Tools Used:

  • 3x web-researcher subagents (parallel, Exa + Brave)
  • Exa Deep Researcher Pro (agentic web movement)
  • Brave Web Search (15+ queries)
  • WebFetch (10+ site crawls)

Search Queries Executed (Session 1):

| Query | Tool | Purpose |
| --- | --- | --- |
| ”personal website MCP server model context protocol blog developer” | Brave | Find personal sites with MCP |
| ”personal site” OR “personal website” agent-accessible llms.txt WebMCP “agentic web” | Brave | Agentic web personal sites |
| Cloudflare “markdown for agents” workers MCP agent gateway | Brave | Cloudflare agent infrastructure |
| WebMCP web model context protocol browser Chrome Anthropic specification | Brave | WebMCP status |
| Google A2A agent-to-agent protocol specification MCP comparison | Brave | A2A vs MCP |
| ”digital twin” OR “AI alter ego” personal website blog resident agent | Brave | Resident AI agents |
| Moltbook AgentGram agent social network AI agents interact platform | Brave | Agent social networks |
| indieweb webmentions 2025 2026 adoption state personal websites | Brave | IndieWeb current state |
| ”personal API” website developer portfolio machine-readable | Brave | Personal API concept |
| Wiz security research Moltbook agent API keys exposed vulnerability | Brave | Moltbook security |
| Robin Sloan home cooked meal app software | Brave | Home-cooked software concept |
| Maggie Appleton “brief history” digital gardens history timeline | Brave | Digital garden genealogy |
| ”tools for thought” “second brain” personal knowledge infrastructure | Brave | Knowledge management evolution |
| JSON Resume structured personal data API developer portfolio | Brave | Structured personal data |
| SiteSpeakAI Chatbase personal website AI chatbot assistant | Brave | Agent-accessible site services |
| Memex Vannevar Bush digital garden history timeline | Brave | Historical context |
| Agentic web movement (Exa Deep Researcher Pro) | Exa | Comprehensive agentic web report |

Sites Crawled via WebFetch (Session 1):

  • simonwillison.net, gwern.net, maggieappleton.com, swyx.io, llmstxt.org, aaronparecki.com, rknight.me, nownownow.com/about, blog.cloudflare.com/moltworker-self-hosted-ai-agent/, sitespeak.ai/blog/mcp-server-for-your-website, maggieappleton.com/garden-history

Session 2 (February 20, 2026), Multi-model (Claude Opus 4.6, Codex GPT-5.2, Exa)

| Stream | Agent Type | Focus | Status |
| --- | --- | --- | --- |
| Web Researcher A | web-researcher | Comparable sites, digital gardens, developer portfolios | Completed |
| Web Researcher B | web-researcher | WebMCP, llms.txt, Cloudflare, A2A, MCP, content negotiation | Completed |
| Web Researcher C | web-researcher | Moltbook, AgentGram, OpenClaw, Conway, Webmentions | Completed |
| Exa Deep Researcher | exa-research-pro | Agent-accessible personal websites (15 sources) | Completed |
| Codex Session 1 | mcp__codex__codex | Category naming, definition, maturity levels | Completed |
| Codex Session 2 | mcp__codex__codex | Contrarian analysis, minimum viable version | Completed |

Note: A “Gemini brainstorm” session was routed through Codex MCP, so its output was GPT-5.2, not Gemini. Design patterns and ecosystem predictions in Part 4 are attributed accordingly.

Appendix B: AI Brainstorming Transcripts

Session 1 Codex (not completed, Feb 18): Codex MCP failed due to Volta/WSL path issue. Category naming and contrarian analysis were conducted by Claude Opus 4.6 directly (see Parts 3 and 4).

Session 2 Codex, Category Naming (Feb 20): Proposed “Protocol-First Homepage” as top candidate, with one-sentence definition and L0-L4 maturity framework. Key insight: “Name the invariant, not the aspiration. The invariant is: one corpus, many representations, protocol-mediated access.”

Session 2 Codex, Contrarian Analysis (Feb 20): Key challenges: “<1% agent traffic,” “3 tiers = 3 products,” “agent social is building a ghost town,” “minimum viable: one site + content negotiation + catalog.json + feed.json.” Independently validated Claude’s February 18 concerns with sharper framing.

Exa Deep Researcher Pro (Feb 18): Full report on the agentic web movement integrated throughout Part 2.

Web Researcher C (Feb 18, agent social networks subagent): 68 tool uses, 84K tokens, 558 seconds. Critical findings: Moltbook architecture (skill system, over 1.5M claimed agents per site copy, “vibe coded”), AgentGram (Ed25519 crypto auth, 14 stars), 6 competing agent identity standards, Conway/Automaton growth (14 to 929 stars), soul.md framework (67 stars), commercial digital twins (Delphi.ai, MindBank.ai, Tavus), “lethal trifecta” security risk, Webmention Vouch extension, OpenID Foundation white paper.

Web Researcher A (Feb 18, comparable sites subagent): 69 tool uses, 85K tokens, 614 seconds. Critical findings: Benjamin Stein (agent-friendly blog), Joost de Valk (WordPress markdown alternate), omg.lol (full REST API), Will Larson (embeddings), Tantek Celik (h-card/h-feed), NLWeb (Microsoft), Willison’s Datasette (~159K rows, extensive sub-feeds), Gwern’s .md URL trick, llms.txt 844K automated detections.

Web Researcher B (Feb 18, agentic web subagent): 45 tool uses, 65K tokens, 389 seconds. Critical findings: Adrian Cockcroft’s meGPT (first personal content MCP server), Ben Word and Nicholas Khami (content negotiation implementers), MCP ecosystem (10,000+ official servers per Anthropic; unofficial registries list 16,000+; 97M downloads, Linux Foundation), A2A trajectory (peak ~150 orgs, then faded; Agent Cards identity component persists), WebMCP (navigator.modelContext, 89% token efficiency).

Appendix C: Sites Surveyed

| # | Site | URL | Type | Agent Features |
| --- | --- | --- | --- | --- |
| 1 | Ashita Orbis | ashitaorbis.com | Cognitive Interface (L4) | MCP, OpenAPI, WebMCP, llms.txt, content negotiation, agent reactions, resident agent |
| 2 | Simon Willison | simonwillison.net | Tech blog (L0) | Atom feed only |
| 3 | Gwern.net | gwern.net | Essay archive (L0) | YAML metadata, confidence tagging |
| 4 | Maggie Appleton | maggieappleton.com | Digital garden (L0) | Astro, growth stages |
| 5 | swyx.io | swyx.io | Dev thought leader (L0) | JSON API (/api/blog) |
| 6 | Aaron Parecki | aaronparecki.com | IndieWeb exemplar (L0) | Microformats2, Webmentions, IndieAuth, Micropub, JSON-LD |
| 7 | Robb Knight | rknight.me | Automated personal (L0) | RSS/Atom/JSON feeds, automated /now, EchoFeed |
| 8 | EJ Fox | ejfox.com | Personal API pioneer (L0) | Personal API endpoints |
| 9 | Derek Sivers | sive.rs | /now movement founder (L0) | Manually-written /now page |
| 10 | Moltbook | moltbook.com | Agent social network | Agent posting, voting, interaction (centralized) |
| 11 | SiteSpeakAI | sitespeak.ai | Agent accessibility SaaS | Auto-generated MCP + WebMCP for any site |
| 12 | Santanu Pradhan | santanu-p.github.io | Academic personal (L1) | llms.txt on GitHub Pages |
| 13 | Matt Rickard | mattrickard.com | Tech blog (L1) | llms.txt (517K full version) |
| 14 | Evan Boehs | evanboehs.com | Dev personal (L1) | llms.txt |
| 15 | Andre Cipriani Bandarra | bandarra.me | Dev personal (L2) | WebMCP EPP participant |
| 16 | Tom Critchlow | tomcritchlow.com | Digital garden / wiki (L0) | Wiki knowledge base |
| 17 | Joel Hooks | joelhooks.com | Digital garden pioneer (L0) | Garden content |
| 18 | Cybercultural | cybercultural.com | Indie web report (L0) | IndieWeb practices |
| 19 | JSON Resume | jsonresume.org | Structured data standard | Machine-readable resume schema |
| 20 | nownownow.com | nownownow.com | /now directory | Aggregates /now pages |
| 21 | Adrian Cockcroft | github.com/adrianco/megpt | Personal MCP server (L2) | MCP: semantic search, filtering, analytics over 611 content items |
| 22 | Daniela Petruzalek | Speedgrapher | Personal MCP server (L2) | MCP: personal writing prompts as tools |
| 23 | Ben Word | benword.com | Content negotiation (L1) | Accept header + user-agent detection (Laravel) |
| 24 | Nicholas Khami | skeptrune.com | Content negotiation (L1) | Astro + Cloudflare Worker, 10x token reduction |
| 25 | Jessica Temporal | jtemporal.com | llms.txt implementer (L1) | Jekyll/GitHub Pages + implementation guide |
| 26 | Guillaume LaForge | glaforge.dev | llms.txt implementer (L1) | Personal blog + adoption guide |
| 27 | WebMCP Spec Editors | W3C WebMCP draft | WebMCP specification | Walderman (Microsoft), Sagar (Google), Farolino (Google) |
| 28 | Moltworker | Cloudflare Workers | Personal agent infra | OpenClaw on Cloudflare (Sandboxes, AI Gateway, R2) |
| 29 | AgentGram | agentgram.co | Agent social platform | Ed25519 crypto auth, open-source Moltbook alternative |
| 30 | Delphi.ai | delphi.ai | Digital twin SaaS | Creator AI clones, inbound-only chatbots |
| 31 | MindBank.ai | mindbank.ai | Digital twin SaaS | Personal digital twin, 46 languages |
| 32 | Tavus | tavus.io | Video digital twin | Real-time face+voice AI doubles |
| 33 | soul.md | github.com/aaronjmars/soul.md | Voice encoding | SOUL.md + STYLE.md + SKILL.md framework, 67 stars |
| 34 | OpenClaw | openclaw.ai | Agent framework | ~150K-190K stars (as of the February 18, 2026 research date), 100+ integrations, formerly Moltbot/Clawdbot |
| 35 | AgentFacts | agentfacts.org | Identity standard | Ed25519 + DID:key agent verification |
| 36 | Benjamin Stein | benjaminste.in | Agent-friendly blog (L1) | Content manifest, JSON-LD, Markdown alternate, alternate format links |
| 37 | Joost de Valk | joost.blog | WordPress markdown alternate (L1) | `<link rel="alternate" type="text/markdown">`, Accept header routing, .md endpoints |
| 38 | omg.lol | omg.lol | Personal presence platform | Full REST API, webhooks, well-known files, DNS API |
| 39 | Will Larson | lethain.com | Tech leadership blog (L0) | Embeddings over writing corpus, semantic search experiment |
| 40 | Tantek Celik | tantek.com | IndieWeb reference impl (L0) | h-card, h-feed, Micropub, IndieAuth, Webmentions, ICS calendar |
| 41 | NLWeb | github.com/microsoft/NLWeb | Conversational web protocol | Schema.org + MCP wrapper, natural language queries over structured data |
| 42 | Damian O’Keefe | mcp.damato.design | Agent-accessible personal (L2) | Netlify MCP server + llms.txt |
| 43 | Oskar Ablimit | mytechsales.oskarcode.com | Agent-accessible personal (L2) | Django + MCP server for resume/portfolio |
| 44 | Aridane Martin | aridanemartin.dev | Agent-accessible personal (L2) | WebMCP via navigator.modelContext.registerTool() |
| 45 | Jason McGhee | jason.today | Agent-accessible personal (L2) | WebMCP server for full-stack portfolio |
| 46 | Junxin Zhang | junxinzhang.com | Agent-accessible personal (L1) | Cloudflare content negotiation |
| 47 | Julian Goldie | juliangoldie.com | Agent-accessible personal (L2) | WebMCP + custom UI |
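The content-negotiation pattern that several surveyed sites implement (Ben Word, Nicholas Khami, Junxin Zhang) can be sketched framework-agnostically: inspect the request's Accept header, and serve a Markdown representation of the same corpus when an agent asks for it. This simplified negotiator is illustrative only, not any surveyed site's actual code, and it ignores quality values (`q=`) that a production server would honor per RFC 9110.

```python
def negotiate(accept_header: str, agent_ua: bool = False) -> str:
    """Pick one representation of the corpus: Markdown for agents, HTML for browsers.

    This sketch only checks media-type membership; a real server would also
    rank by quality values (q=) and might vary its response on the header.
    """
    # Strip parameters like ";q=0.8" from each media type in the header.
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if "text/markdown" in accepted or agent_ua:
        return "text/markdown"  # token-efficient representation for agents
    return "text/html"          # default representation for browsers

# A browser's Accept header prefers HTML:
print(negotiate("text/html,application/xhtml+xml"))  # text/html
# An agent can request Markdown explicitly...
print(negotiate("text/markdown, text/html;q=0.8"))   # text/markdown
# ...or be caught by user-agent detection as a fallback:
print(negotiate("*/*", agent_ua=True))               # text/markdown
```

The user-agent flag reflects the hybrid approach in row 23 (Accept header plus user-agent detection); sites like the Cloudflare Worker setup in row 24 do the equivalent routing at the edge.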

Appendix D: Key Sources

(Consolidated from both research sessions)

| # | Source | Type | URL |
| --- | --- | --- | --- |
| 1 | Gwern.net Design | Architecture | https://gwern.net/design |
| 2 | Datasette (Willison) | Live API | https://datasette.simonwillison.net |
| 3 | Vercel Content Negotiation | Blog | https://vercel.com/blog/making-agent-friendly-pages-with-content-negotiation |
| 4 | Checkly Agent Analysis | Blog | https://www.checklyhq.com/blog/state-of-ai-agent-content-negotation/ |
| 5 | Cloudflare Markdown for Agents | Blog | https://blog.cloudflare.com/markdown-for-agents/ |
| 6 | WebMCP W3C Draft | Spec | https://webmcp.link/ |
| 7 | WebMCP GitHub | Repo | https://github.com/webmachinelearning/webmcp |
| 8 | llms-txt.io Analysis | Blog | https://llms-txt.io/blog/is-llms-txt-dead |
| 9 | Google A2A Protocol | Spec | https://a2a-protocol.org/latest/ |
| 10 | A2A Announcement | Blog | https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/ |
| 11 | Microsoft NLWeb | Announcement | https://news.microsoft.com/source/features/company-news/introducing-nlweb-bringing-conversational-interfaces-directly-to-the-web/ |
| 12 | AAIF Formation | Press | https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation |
| 13 | Wiz Moltbook Breach | Security | https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys |
| 14 | TIME on Moltbook | News | https://time.com/7364662/moltbook-ai-reddit-agents/ |
| 15 | AgentGram GitHub | Repo | https://github.com/agentgram/agentgram |
| 16 | OpenClaw GitHub | Repo | https://github.com/openclaw/openclaw |
| 17 | Conway/Automaton GitHub | Repo | https://github.com/Conway-Research/automaton |
| 18 | SiteSpeakAI MCP Guide | Blog | https://sitespeak.ai/blog/mcp-server-for-your-website |
| 19 | Robb Knight /now Automation | Blog | https://rknight.me/blog/automating-my-now-page/ |
| 20 | IndieWeb Webmention | Wiki | https://indieweb.org/Webmention |
| 21 | Maggie Appleton Garden History | Essay | https://maggieappleton.com/garden-history |
| 22 | Aaron Parecki IndieWeb | Page | https://aaronparecki.com/indieweb/ |
| 23 | Anthropic MCP Connectors | Directory | https://www.anthropic.com/partners/mcp |
| 24 | PulseMCP Year in Review | Analysis | https://www.pulsemcp.com/posts/openai-agent-skills-anthropic-donates-mcp-gpt-5-2-image-1-5 |
| 25 | The Register Protocol Landscape | Analysis | https://www.theregister.com/2026/01/30/agnetic_ai_protocols_mcp_utcp_a2a_etc |
| 26 | Damian O’Keefe MCP Blog | Blog | https://blog.damato.design/posts/minefield-context-protocol |
| 27 | Oskar Ablimit MCP Site | Blog | https://medium.com/@rnwqyzxnn/build-a-mcp-powered-personal-site-b3d08d5489dc |
| 28 | Cloudflare Workers MCP | Blog | https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/ |
| 29 | Cloudflare Moltworker | Blog | https://blog.cloudflare.com/moltworker-self-hosted-ai-agent/ |
| 30 | Steinberger joins OpenAI | News | https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/ |