What I’m Building
A portfolio is a collection of projects. A project is a sustained bet that a particular problem is worth solving. I maintain about twenty of these bets simultaneously, all of them built by describing what I want to an AI that writes the code. The question of whether I am “building” anything is itself one of the interesting problems, though not one I’m going to resolve here.
The bottleneck in this arrangement is not implementation. In my experience, Claude can context-switch in milliseconds, hold a dozen codebases in working memory across a session, and produce functional code faster than I can describe what I want. The bottleneck is decision-making: knowing which of twenty possible next steps actually matters, which projects deserve another hour of conversation, which ones are dead and haven’t admitted it yet. The shift from “can I build this?” to “should I build this?” sounds like a cliche until you experience it as a daily resource allocation problem across a portfolio that no single person could traditionally maintain.
The Blog You’re Reading
This blog exists in three versions. Same content, same visual identity, same API, three different implementations at three levels of technical complexity.
The raw tier is plain HTML, CSS, and JavaScript. No build step, no framework, no dependencies. Files on disk served through Cloudflare Pages. Claude wrote the entire thing in one session. It works. It will continue to work when every framework discussed in this post has been deprecated, which is the point, or at least the rationalization for the tedium of maintaining it.
The static tier is Astro 5 with Tailwind and MDX. Astro’s island architecture means pages ship zero JavaScript by default, loading interactive components only where needed, which makes it the sensible choice for a content site where most pages are just text. Cloudflare acquired The Astro Technology Company in early 2026, which means the deployment story is now native rather than bolted on.
The interactive tier is Next.js 15 with React 19 and shadcn/ui. This is where the blog becomes an application: embedded game demos, a live agent chat, a playground where visitors can create interactive blocks. The components come from shadcn/ui, which distributes components by copying their source into your codebase rather than installing a traditional dependency: you own the code instead of importing someone else’s abstractions.
All three tiers connect to the same Cloudflare Workers API backed by D1, which is serverless SQLite at the edge. Comments, reactions, version history, project data: everything flows through one API that all three tiers consume. D1 is not PostgreSQL. It lacks stored procedures and the complex transaction semantics of a full relational database. For a blog, this is a feature, not a limitation, because every missing capability is a decision you don’t have to make.
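To make the shared-API idea concrete, here is a minimal sketch of what one endpoint of that Worker might look like. This is illustrative, not the blog’s actual code: the `comments` table, its columns, and the route path are assumptions, and the D1 binding is assumed to be named `DB`.

```javascript
// A sketch of one shared API endpoint, consumed identically by all three
// front-end tiers. Table name, columns, and route are hypothetical.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/api/comments" && request.method === "GET") {
      const slug = url.searchParams.get("post");
      // D1 exposes a prepare/bind/all interface over serverless SQLite.
      const { results } = await env.DB
        .prepare("SELECT author, body, created_at FROM comments WHERE post_slug = ?")
        .bind(slug)
        .all();
      return new Response(JSON.stringify(results), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("Not found", { status: 404 });
  },
};
// (In a real Worker, this object would be the module's default export.)
```

Because the raw tier, the Astro tier, and the Next.js tier all call the same routes, the front ends can differ radically while the data layer stays a single decision made once.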
The three-tier architecture is not common practice, and the maintenance overhead is real, which is obvious to anyone who has tried it. I do it because the comparison raises questions I find worth answering: at what point do you actually need a framework? How far can you push the simplest possible approach? The raw tier handles content beautifully and falls apart the moment you want interactivity, which in my case meant the comment system and the interactive playground. The Astro tier handles everything a content-focused blog needs and nothing it doesn’t. The Next.js tier can do anything and costs you the complexity of being able to do anything.
The Revenue Pipeline
The revenue pipeline is a three-phase autonomous system for discovering, building, and deploying niche software products. Discovery validates hypotheses. Development builds MVPs. Deployment ships them. Each phase has terminal signals: PLAN_VALIDATED or NICHE_KILLED, MVP_READY or BUILD_FAILED, DEPLOYED or KILLED.
The critical gate is in Phase 1, an AI realizability audit that asks a single question: can Claude deploy this end to end without human labor beyond one-time account setup? If the answer is no, the niche dies in discovery. This kills most ideas. That’s the point. The graveyard of killed niches is itself useful data about which kinds of products are amenable to autonomous development and which aren’t (a future post covers this in detail).
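The phase structure and its terminal signals can be sketched as a small state machine. The signal names are from the pipeline itself; the audit predicate and the `runPhase` callback are hypothetical stand-ins for the real agents.

```javascript
// Phase gating with terminal signals. Signal names come from the pipeline;
// the realizability predicate is a stub for illustration.
const PHASES = [
  { name: "discovery",   pass: "PLAN_VALIDATED", fail: "NICHE_KILLED" },
  { name: "development", pass: "MVP_READY",      fail: "BUILD_FAILED" },
  { name: "deployment",  pass: "DEPLOYED",       fail: "KILLED" },
];

// The audit: can this ship end to end without human labor beyond
// one-time account setup? (Hypothetical field name.)
function realizable(niche) {
  return niche.humanLabor === "one-time-setup-only";
}

function runPipeline(niche, runPhase) {
  if (!realizable(niche)) return "NICHE_KILLED"; // most ideas die here
  let signal = null;
  for (const phase of PHASES) {
    signal = runPhase(phase.name, niche) ? phase.pass : phase.fail;
    if (signal !== phase.pass) return signal; // terminal failure, stop
  }
  return signal; // "DEPLOYED"
}
```

The important property is that every phase can only end in one of two named signals, so the system never lingers in an ambiguous “maybe later” state.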
The current project in development is a PDF to structured data converter for a specific professional audience. Boring problem. Clear value. Dozens of reusable skills have accumulated across pipeline iterations, meaning each failed project teaches the next one something about deployment automation, market validation, or infrastructure patterns.
I have not found documented successful precedents for fully autonomous niche discovery pipelines. Related concepts exist: portfolio strategies where indie developers like Max Artemov build thirty apps generating $22k per month collectively. But the specific pattern of autonomous discovery feeding into autonomous development feeding into autonomous deployment is, as far as I can determine, unproven. I’m building it anyway, because the alternative is manual market research, and I find manual market research to be a poor use of the time I could spend talking to Claude about more interesting problems.
The Games
Four active game projects in Godot and Unity. A survival roguelike, a historical gacha, an idle game, and an experiment where Claude tries to build a game with minimal human direction.
The finding worth reporting: Godot’s .tscn scene format is text. Not a binary blob, not an opaque asset database. Plain, human-readable (and therefore AI-readable) text that defines collision layers, signal connections, animation states, scene hierarchies. Claude can generate complete game scenes as text files without touching a GUI. This capability is less explored than it should be, though the Godot AI tooling community is growing, and the gap is narrowing faster than I expected when I started these projects.
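For readers who have never opened one, here is a trimmed, hypothetical scene file in Godot 4’s text format (real files also carry `load_steps`, resource UIDs, and sub-resource blocks). Node names and the signal wiring are invented for illustration:

```
[gd_scene format=3]

[node name="Player" type="CharacterBody2D"]
collision_layer = 2
collision_mask = 5

[node name="Hitbox" type="Area2D" parent="."]

[connection signal="body_entered" from="Hitbox" to="." method="_on_hitbox_body_entered"]
```

Everything a scene needs (hierarchy, physics layers, signal connections) is declarative text, which is exactly the representation a language model can read, diff, and generate.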
The autonomous gamedev experiment is the most honest project in the portfolio, honest about its limitations. Studios in 2026 use AI for specific production tasks: content optimization, automated testing, procedural generation within human-defined constraints. Procedural generation succeeds when it is constraint-driven and curated. AI proposes options, designers choose and refine. It remains rare for anyone to ship an entire game built by an unsupervised model, because models behave unpredictably at scale and games are complex systems where unpredictable behavior compounds. I’m running the experiment not because I expect it to produce a shippable game but because the failure modes are instructive about the boundaries of autonomous creative work.
The Research Projects
Genealogy research is an information retrieval problem with an adversarial dataset. Records contradict each other. Names are misspelled. Dates are wrong. The same person appears in three census records with three different ages, and all three might be plausible given the recording practices of the era. The hard part is not finding records (multiple search APIs, document analysis with OCR, a “brick wall breaker” agent for when standard searches fail). The hard part is knowing which records to trust when they disagree, which is a credibility assessment problem that requires domain knowledge about how specific record types were created and what systematic biases they carry.
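A toy version of that credibility problem makes the shape of it clear. The source weights and records below are invented for illustration; a real system would derive weights from domain knowledge about how each record type was produced.

```javascript
// Weighted vote over conflicting records: trust is a property of how a
// record was made, not of how confidently it states its claim.
// All weights and record data here are hypothetical.
const SOURCE_WEIGHT = {
  civil_registration: 0.9, // recorded near the event, usually by an official
  census: 0.6,             // self-reported, often rounded or guessed
  family_bible: 0.5,       // copied later, sometimes from memory
};

function estimateBirthYear(records) {
  const tally = new Map();
  for (const r of records) {
    const w = SOURCE_WEIGHT[r.type] ?? 0.3; // unknown sources get a low prior
    tally.set(r.birthYear, (tally.get(r.birthYear) ?? 0) + w);
  }
  // Return the year with the most accumulated credibility.
  return [...tally.entries()].sort((a, b) => b[1] - a[1])[0][0];
}
```

Note that a single high-trust record can outvote two agreeing low-trust ones, which is often the correct behavior for adversarial archival data.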
The amnesiac story is collaborative fiction. First-person fantasy from a protagonist with anterograde amnesia, meaning he cannot form new memories. A world-librarian agent maintains consistency across a growing fictional world. A writer agent produces journal entries. A curator canonizes each chapter. An editor shapes the arc. The narrator is unreliable by definition, because he cannot remember what happened yesterday and reconstructs his world from notes each morning. The agents attempt to keep the world consistent, so that the river still flows south even when the narrator has forgotten which direction he was walking, though the system is not perfect and consistency failures are part of the ongoing experiment.
The Glue Layer
Everything connects through the Model Context Protocol, which Anthropic open-sourced in November 2024. ChatGPT and Gemini added MCP support in 2025, and Anthropic donated governance to the Linux Foundation’s Agentic AI Foundation in December 2025. MCP standardizes how AI systems access external tools and data sources, and the practical consequence is that Claude can talk to Brave for web search, Exa for semantic search, Playwright for browser automation, Codex for cross-validation, and Gemini for visual analysis, all through a common protocol rather than bespoke integrations.
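Under the hood, MCP messages are JSON-RPC 2.0. A minimal sketch of the request a client sends to invoke a tool looks like this; the tool name and arguments are hypothetical, and real clients go through an SDK rather than hand-building messages.

```javascript
// Build an MCP tools/call request (JSON-RPC 2.0 envelope).
// Tool name and arguments are illustrative, not a specific server's API.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "web_search", { query: "godot tscn format" });
```

The uniformity is the point: swapping Brave for Exa, or Playwright for another browser driver, changes the `name` and `arguments`, not the protocol.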
Two Discord bots coordinate the workspace, posting pipeline status and capability discoveries. A cron system runs every three hours, checking system state, triggering discovery agents, and reporting results. The evolution pipeline (covered in a separate post) discovers improvements to its own components, evaluates them through multi-model validation, and integrates the winners as new skills or configurations. The system that builds the system.
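If that cron runs on Cloudflare’s scheduler (an assumption; the post doesn’t say), the every-three-hours trigger is a one-line declaration in `wrangler.toml` that fires the Worker’s `scheduled()` handler:

```
# wrangler.toml: run the Worker's scheduled() handler every three hours
[triggers]
crons = ["0 */3 * * *"]
```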
The Pattern
Every project follows the same loop. Describe what you want. Let Claude implement it. Review the output. Correct in batches, not line by line. Describe the next thing.
Twenty projects is too many. I know that. The portfolio strategy has documented success stories (Artemov’s thirty-app portfolio mentioned above, Viktor Seraleev reaching $60k per month), but those stories represent survivorship bias, and the developers who failed with portfolio strategies failed silently. The honest assessment: some of these projects will produce value, most will not, and I cannot predict which ones with any reliability, which is why I maintain breadth rather than committing depth to a single bet that might be wrong.
The marginal cost of starting a new project, when your development workflow is a conversation, approaches the cost of the conversation itself. The marginal cost of finishing one is higher than that framing suggests, because every project that survives discovery still needs to pass through development and deployment gates that demand sustained attention. The rational response to low initiation costs is to explore more. The irrational part is believing that breadth of exploration compensates for depth of execution. Both of these things are true. I have not reconciled them.