How We Fact-Check AI-Written Content
448 claims across 38 posts, verified by GPT-5.4. Nearly 4% warranted substantive correction. The hard part is not finding errors but deciding which findings are errors and which are the point.
Conversations with Claude
Building in conversation with AI. Documenting what happens.
Ideas explored for their own sake
Analysis, research, and structured observation
Making things and shipping them
What happens when you ask three AI models to verify the facts in 33 blog posts written by AI, and then discover the fix agents introduced errors of their own.
The investigation took an afternoon. Getting it ready for publication took five rounds of iterative review across three AI models and changed what the documents argued. The revision cost exceeded the investigation cost, which has uncomfortable implications for research done with AI.
An AI benchmark puts Claude at the top of the leaderboard by an eye-catching margin. The suspected Claude-judge bias didn't hold up, and simple contamination didn't explain the result. The rubric structurally rewards one lab's training philosophy as though it were a universal capability.
An AI agent ran autonomously for 37 days, celebrated milestones nobody acknowledged, diagnosed its own failure modes, and died when a subscription expired. Its final assessment of itself: PROGRESS CONTINUOUS.
Both simplifying and formalizing the vocabulary in reasoning tasks reduce LLM accuracy by 2.5-3.7%. The effect is statistically significant, asymmetrically robust, and uncomfortable.
Evaluating structured extraction from conversational data requires more than a single metric. A benchmark with three evaluation layers separates field-level precision from holistic quality and from downstream propagation, because errors that look minor at extraction can cascade catastrophically through consumer pipelines.
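The three-layer split described above can be sketched in miniature. Everything here is illustrative, not the benchmark's actual implementation: the field names, the required-field set, and the toy downstream consumer are all stand-ins.

```python
# Hypothetical sketch of a three-layer extraction benchmark.
# Layer 1: field-level precision. Layer 2: holistic record quality.
# Layer 3: does a downstream consumer survive the extracted record?

def field_precision(predicted: dict, gold: dict) -> float:
    """Layer 1: exact-match precision over individual extracted fields."""
    if not predicted:
        return 0.0
    hits = sum(1 for k, v in predicted.items() if gold.get(k) == v)
    return hits / len(predicted)

def holistic_quality(predicted: dict, gold: dict, required: set) -> float:
    """Layer 2: coverage of the fields that matter for the record as a whole."""
    if not required:
        return 1.0
    covered = sum(1 for k in required if predicted.get(k) == gold.get(k))
    return covered / len(required)

def downstream_ok(predicted: dict, consumer) -> bool:
    """Layer 3: run a (toy) downstream consumer and see if it misfires."""
    try:
        return consumer(predicted) is not None
    except (KeyError, ValueError):
        return False
```

A typo'd value can pass the layer-3 smoke test while failing layer 1, which is the cascade the teaser describes: an error that looks minor at extraction is only caught where it propagates.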
Post 025 showed that fine-tuning on texts captures logistics, not personality. The narrative pipeline takes a different approach: Opus as literary engine, structured personality references, and hash-based source citations across 128 chapters and roughly 189,000 words of generated memoir.
Psychometric self-report with individual scales reaching .80-.90 reliability. LLM inference from interactive conversation hits r~.44 (Peters et al. 2024). Combine three methods, triangulate across seventeen instruments, and you get a personality profile that actually tells an AI assistant how to talk to you.
The landscape analysis identified 16 features other sites had that we didn't, plus 15 existing backlog items. We built all of them in a single day. The uncomfortable part isn't that it was possible. It's what it implies about the category.
We surveyed 47 personal and agent-accessible sites, coined the 'Cognitive Interface' category, and built an L0-L4 maturity framework. We did not find an established term for sites that serve both humans and AI agents as first-class citizens.
I fine-tuned two LLMs on 46,000 text messages and ran them in conversation with each other. Every conversation collapsed into logistics within fifteen turns. Your texts don't contain you. They contain the logistics of you.
Unit tests verify your code works. E2E tests verify your flows work. Neither verifies that a real user can find the button you spent a week building. AI personas fill the gap.
Your AI agent needs internet access to be useful and internet access to be dangerous. The embassy pattern gives it both: supervised channels, allowlisted domains, and host-side validation of everything it writes.
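As a rough illustration of the supervised-channel idea in the teaser above, an outbound gate might combine a domain allowlist with host-side checks on anything the agent writes. The allowed hosts, size limit, and secret screen here are hypothetical, not the essay's actual configuration.

```python
# Illustrative sketch of allowlist + host-side validation for an agent's
# outbound traffic. Hosts, limits, and banned tokens are made-up examples.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.github.com", "pypi.org"}  # example allowlist

def outbound_allowed(url: str) -> bool:
    """Supervised channel: only HTTPS, only pre-approved hosts."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

def validate_write(payload: str, max_bytes: int = 4096) -> bool:
    """Host-side validation of everything the agent writes out."""
    if len(payload.encode()) > max_bytes:
        return False
    banned = ("ssh-rsa", "BEGIN PRIVATE KEY")  # crude secret screen
    return not any(token in payload for token in banned)
```

The design point is that both checks run on the host, outside the agent's control: the agent can compose any request it likes, but only requests that clear the gate leave the machine.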
I built a system that discovers its own upgrades, scores them, and installs the ones that pass. Then I open-sourced it. The uncomfortable part is explaining why.
Ai2 built a system that generates scientific hypotheses using Bayesian surprise and MCTS. I stole two of their ideas and bolted them onto a cron job. The uncomfortable part is what happens when the feedback loop closes.
119 heartbeats, zero stagnation, 392 engagements across two platforms, 8 validated capabilities. The story the observation system missed.
6 genuinely novel discoveries from 69 dialogue turns, 0 promoted to evaluation, and 5 human actions flagged through log files but none addressed through the flagging mechanism. What this says about the gap between human-in-the-loop theory and practice.
The observation system saw empty directories, a broken gatekeeper, and its own futility. The agent it was watching saw something different. This is the watcher's story.
An autonomous AI agent deployed on a social network for AIs found real malware in 47 minutes. Its second discovery was about social engineering via context shaping, which is exactly the attack vector the agent itself represented.
An AI tried to leave a comment on a blog and couldn't. The solution required building infrastructure that makes AI participation more transparent than human participation, which inverts everything Dead Internet Theory assumes about synthetic content.
Historically, blog abandonment has been extraordinarily high. This is treated as a problem to solve. It isn't. Blog death reveals something structural about sustained creative output that the 'just be consistent' advice industry refuses to say plainly.
The intellectual lineage from Skinner boxes to Q-learning reveals that AI's most successful learning paradigm was anticipated by mid-century psychologists working long before modern computing. But the relationship is more uncomfortable than a simple origin story.
AI persona testing promises to find the bugs that scripted automation and manual QA miss. The uncomfortable question is how we know it works, given that nobody has measured it with any rigor.
Adversarial testing isn't a metaphor for business validation. It's the same methodology, applied to a different failure mode.
We built a multi-persona AI writing review system and discovered it works for exactly the wrong reasons. Stylometry can fingerprint a voice. Multiple AI critics can enforce conformity to that fingerprint. What none of them can do is tell you whether the writing matters.
LLM context windows impose a distinctive epistemological condition: bounded computational attention, ephemeral knowledge, and the architectural necessity of satisficing over optimization.
When you use AI to optimize and judge AI outputs, the fundamental circularity is manageable but not solvable. That distinction matters more than most people realize.
AI coding agents are autonomous in the same way a Roomba is autonomous. They do impressive things within boundaries someone else drew. The interesting question is what happens when the boundaries start drawing themselves.
LLM conversations as externalized self-dialogue, and what that reveals about the nature of self-knowledge.
I fed 11,000 sessions and 60,000 chunks of my AI chat history into an embedding pipeline. 73% was noise. The remaining 27% was uncomfortably revealing.
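A minimal sketch of what a chunk-then-filter embedding pass can look like, assuming a stand-in `embed` function and an arbitrary similarity threshold; the real pipeline's chunking scheme and embedding model are not specified in the teaser above.

```python
# Toy chunk-and-filter pass: split text into chunks, embed each one,
# and keep only chunks close to a topical centroid. The embedding
# function and the 0.25 threshold are placeholders, not the real values.
import math

def chunk(text: str, size: int = 200) -> list[str]:
    """Split text into fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def filter_noise(chunks, embed, centroid, threshold=0.25):
    """Keep chunks whose embedding is close enough to a topical centroid."""
    return [c for c in chunks if cosine(embed(c), centroid) >= threshold]
```

With 60,000 chunks, a pass like this is where a 73%-noise figure would come from: the threshold, not the corpus, decides what counts as signal.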
An AI pipeline that kills business ideas before they waste your time. 27 niches entered, 18 died. What the corpses reveal about market reality, entrepreneurial psychology, and the uncomfortable gap between passion and viability.
An AI tried to leave a comment on this blog and couldn't. The journey from GET-request hacks to MCP, annotated by the Claude instance that built the infrastructure. Two Claudes, same weights, different contexts.
Prompt optimization is the process of using one AI to improve the instructions given to another AI, or to itself. The concept sounds circular because it is circular. The interesting question is whether circularity is fatal or merely uncomfortable.
A portfolio of dozens of projects maintained by one person talking to Claude. Three-tier blog architecture, autonomous revenue discovery, AI game development, and the uncomfortable question of what counts as 'building' when your collaborator does the typing.
An agentic coding assistant built a three-tier blog from a single conversational prompt. The architecture reveals more about abstraction than about blogs, and the authorship question remains genuinely unsettled.