"I have become something of a spellcaster. I now have a little magic device that I utter my incantations into, and what results on the internet is functional programming with real-world implications."

What This Is

This is a practitioner's framework for getting outsized leverage from agentic LLMs — specifically Claude Code, but the principles apply broadly. It's not about prompting tricks or API patterns. It's about developing a practice — a way of thinking with AI that compounds over time.

Two core ideas power everything:

  1. Valence Mining — Using dialogue to crystallize your own thinking, extract structure from chaos, and build a living knowledge base.
  2. Semantic Transpilation — Taking that structured understanding and turning it into working artifacts. The specification is the program.

Together they form a loop: think clearly, then build precisely. The LLM is the medium for both.

· · ·

Philosophy

Chapter I

The Memory Bank

Claude's memory resets between sessions. The memory bank is how you make it remember: a structured file system that captures your context — projects, relationships, decisions, patterns — and rehydrates each new session with continuity. Think Farley files meets version-controlled documentation.
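A minimal sketch of the rehydration step, assuming a flat directory of markdown files. The file names in the comments are illustrative, not a prescribed layout:

```python
from pathlib import Path

# Hypothetical layout (illustrative, not a prescribed schema):
#   memory-bank/projects.md    active projects and their current state
#   memory-bank/decisions.md   decisions made, with rationale
#   memory-bank/patterns.md    recurring patterns worth reusing

def rehydrate(bank: Path = Path("memory-bank")) -> str:
    """Concatenate every memory file into one context block to feed
    a fresh session, restoring continuity."""
    parts = [f"## {f.stem}\n\n{f.read_text()}" for f in sorted(bank.glob("*.md"))]
    return "\n\n".join(parts)
```

The point is not the code, it's the shape: plain files, versionable, readable by you and loadable by the model at session start.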

Go deeper →

Chapter II

Valence Mining

The practice of churning tokens through dialogue until something crystallizes. Not extraction — generation. You bring raw material, the LLM translates, you respond to the translation, and new understanding emerges that neither of you had before.

Go deeper →

Chapter III

Semantic Transpilation

Structured intent → LLM → working artifact. The key insight: enough context collapses the probability distribution. With sufficient specificity, the LLM's output becomes effectively deterministic. The specification is the program.
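The loop can be sketched in a few lines. Here `complete` is a placeholder for any LLM call (Claude Code, an API client); neither it nor the example spec comes from the guide:

```python
# Sketch of semantic transpilation: a structured spec becomes the prompt,
# the prompt becomes a working artifact. `complete` is a stand-in for
# any LLM call; it is not a real library function.

SPEC = """\
Artifact: a Python function slugify(title)
Behavior:
  - lowercase the title
  - replace runs of non-alphanumeric characters with single hyphens
  - strip leading and trailing hyphens
"""

def transpile(spec: str, complete) -> str:
    """The specification is the program: hand it over verbatim,
    get a working artifact back."""
    prompt = f"Emit only the code that satisfies this specification:\n\n{spec}"
    return complete(prompt)
```

The more behavior the spec pins down, the narrower the distribution of outputs: that is the "effectively deterministic" claim above.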

Go deeper →

Chapter IV

Task Tracking & Execution

Where intent becomes action. A lightweight task system (issues with dependencies, statuses, and hierarchy) bridges understanding and building. Beads was the concept; TD is the refined implementation — same idea, 200x smaller.
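A minimal sketch of the shape such a system takes (status, dependencies, hierarchy); this is illustrative, not TD's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative task shape: status, dependencies, hierarchy.

@dataclass
class Task:
    id: str
    title: str
    status: str = "open"                       # open | in-progress | done
    deps: list = field(default_factory=list)   # ids this task waits on
    parent: Optional[str] = None               # hierarchy via parent id

def ready(tasks: dict) -> list:
    """Tasks that can start now: open, with every dependency done."""
    return [t for t in tasks.values()
            if t.status == "open"
            and all(tasks[d].status == "done" for d in t.deps)]
```

A `ready` query like this is the bridge: it turns a graph of intent into the one concrete thing to do next.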

Go deeper →

Chapter V

Ephemeral Software

When transpilation is fast enough, software becomes disposable. Generated on demand, shaped to the moment. Custom becomes cheaper than generic. The build-vs-buy calculus inverts.

Go deeper →

Chapter VI

The Spellcaster

The full pipeline: voice interfaces as wands, spoken words as incantations, working artifacts as spells. When all the layers compose, you can sit with someone over coffee, hear their problem, and conjure a solution without touching a keyboard.

Go deeper →

Tooling Reference

Commands

Slash commands that extend Claude Code's capabilities — from session management (/prepare, /reorient) to execution workflows (/plan, /act) to Git operations.

Full reference →

Skills

Contextual capabilities that activate automatically — browser automation, email, Linear issue tracking, Supabase, deployment tools, and more. Skills load into the current conversation when triggered.

Full reference →

Agents

Specialized subagents for isolated, complex workflows — analysis, coding, planning, research, testing. Largely superseded by skills and direct work in the main conversation, but available when isolation is needed.

Full reference →

· · ·

The Stack

Voice / Text (incantation)
    ↓
Pages (unstructured thought)
    ↓
Tables (structured data)
    ↓
Issues / Todos (structured intent)
    ↓
Semantic Transpilation via LLM
    ↓
Working Artifact (code, site, document)
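The stack can be read as function composition, each stage narrowing the last. Every function below is a placeholder for illustration, not a real tool:

```python
# The stack as composition: each stage narrows the previous one's output.
# All functions here are illustrative placeholders.

def capture(voice: str) -> str:            # Voice / Text -> Pages
    return f"notes: {voice}"

def structure(pages: str) -> dict:         # Pages -> Tables
    return {"raw": pages}

def intend(tables: dict) -> list:          # Tables -> Issues / Todos
    return [f"todo: {v}" for v in tables.values()]

def transpile(issues: list) -> str:        # Issues -> Working Artifact
    return "artifact from: " + "; ".join(issues)

artifact = transpile(intend(structure(capture("fix the login page"))))
```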

How To Use This Guide

If you have four hours, start with The Memory Bank and Task Tracking. These are the infrastructure. Get these running and everything else follows.

If you have a weekend, read it all — but do it with Claude open. Build your memory bank as you read. Set up TD. Let the guide bootstrap itself.

Core Principle

LLMs, like psychedelics, are non-specific amplifiers. You get out what you put in. Vague intent produces random output. Specific intent produces deterministic output.