
DSPy Prompt Optimizer

Production AI Tooling

Automated prompt engineering using Stanford's DSPy framework. Optimizes Claude Code skill prompts through Bootstrap, COPRO, and iterative algorithms with cross-validation.

  • 11 of 13 optimization targets deployed
  • Bootstrap, COPRO, and iterative algorithms
  • Cross-validation with dropout regularization
  • Background optimization with progress tracking
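The cross-validation step listed above can be sketched in plain Python. The fold splitter and the way dropout is applied to training examples here are illustrative assumptions, not the project's actual implementation.

```python
import random

def kfold_with_dropout(examples, k=5, dropout=0.2, seed=0):
    """Yield (train, val) splits; randomly drop a fraction of each
    training fold so a prompt candidate is never tuned against the
    exact same example set every round (hypothetical regularizer)."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        kept = [ex for ex in train if rng.random() >= dropout]
        yield kept or train, val  # never yield an empty training fold
```

Each candidate prompt would then be scored on every validation fold and the scores averaged, so a candidate that only works on one slice of the data does not win.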
Python · DSPy · Claude Code
View on GitHub

Activity Timeline

  • Batch optimization run for 4 agents failed with exit code 144.

    Multi-agent training loop unstable, likely due to memory constraints. 3 of 5 monitoring tasks completed; primary job did not finish. Root cause unresolved.

    blocked · health-check
  • Hostile-but-fair review framework codified; matching algorithm improved.

    Five-criteria review framework with 3-tier severity triage established. Anchor entity extraction combined with char n-grams and keyword Jaccard distance added to matching algorithm. Publication-review pipeline added as optimization target.

    feature · refactor
  • Review matching algorithm improved; publication-review optimization pipeline added.

    Matching algorithm upgraded with anchor entities, character n-grams, and keyword Jaccard similarity. New pipeline targets all 33 blog posts for prompt optimization. Critical document review framework formalized with 3-tier triage.

    feature · refactor
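The matching signals named in these entries (character n-grams plus keyword Jaccard similarity) can be sketched in plain Python. The n-gram size, whitespace keyword tokenization, and equal weighting below are all illustrative assumptions; the anchor-entity extraction step is not reproduced here.

```python
def char_ngrams(text, n=3):
    """Set of character n-grams for fuzzy, typo-tolerant overlap."""
    t = text.lower()
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a or b else 0.0

def match_score(doc_a, doc_b, n=3):
    """Blend char n-gram and keyword Jaccard similarity. The 50/50
    weighting is a placeholder, not the pipeline's actual weights."""
    ngram_sim = jaccard(char_ngrams(doc_a, n), char_ngrams(doc_b, n))
    keyword_sim = jaccard(set(doc_a.lower().split()),
                          set(doc_b.lower().split()))
    return 0.5 * ngram_sim + 0.5 * keyword_sim
```

Character n-grams tolerate small edits and morphological variation, while keyword Jaccard rewards shared vocabulary; blending the two is a common way to match a review back to its source document.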
  • Hostile-but-fair document review framework designed and piloted on blog post.

    Five-criteria framework built for pre-publication critique: steelman opposition, weak claims, consistency, scope, evidence gaps. Applied to the agentic coding post; full analysis output not captured.

    experiment · feature
  • Phase 1.5 consistency optimization designed: 5 extraction fields, two-phase enum+COPRO approach.

    Enum discovery from 24 existing extractions via Opus categorization, followed by COPRO optimization (9 calls/field). Checkpoint gates added before Phase C and Phase 2b to control compute spend.

    architecture · experiment
  • Consistency optimization scoped: 5 low-consistency fields identified.

    Targets: relationship_dynamics, emotional_tone, emotional_arc, negotiation_patterns, implicit_assumptions. Two-phase plan — enum discovery then COPRO optimization — drafted but not yet implemented.

    experiment · architecture
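Combining the 9-calls-per-field figure from the Phase 1.5 design with the five fields listed above, the checkpoint gate before Phase 2b reduces to a simple budget check. The `gate` function and the budget value are hypothetical; only the call counts and field names come from the entries.

```python
CALLS_PER_FIELD = 9  # COPRO optimization calls per field (from the plan)
FIELDS = ["relationship_dynamics", "emotional_tone", "emotional_arc",
          "negotiation_patterns", "implicit_assumptions"]

def gate(spent_calls, budget_calls, next_phase_cost):
    """Checkpoint gate: allow the next phase only if its projected
    cost still fits within the remaining call budget."""
    return spent_calls + next_phase_cost <= budget_calls

# Projected Phase 2b spend: 9 calls/field across 5 fields = 45 calls.
phase_2b_cost = CALLS_PER_FIELD * len(FIELDS)
```

A gate like this, evaluated before Phase C and Phase 2b, is what lets the run stop cheaply instead of discovering the overspend after the fact.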