The Problem

Every conversation with an LLM starts from zero. There is no persistent memory. No continuity. No accumulation. Whatever understanding you built in the last session — the nuances, the context, the decisions — all of it dissolves when the window closes.

Most people experience this as friction and move on. The memory bank is the solution to this problem: a structured file system that the LLM reads at the start of every session, reconstructing context from documentation rather than from memory.

Key Insight

The memory bank is not a backup. It's the mechanism of AI's persistence. Without it, every conversation starts from zero. With it, starting a new session feels like resuming a conversation.

The Structure

A memory bank is a directory of markdown files organized by function:

memory-bank/
├── 00-core/           # Foundation — identity, axioms, rarely changes
├── 01-active/         # Current work — what's happening NOW
│   ├── currentWork.md # The hot state (~100 lines, lean)
│   ├── blockers.md    # What's stuck
│   └── nextUp.md      # What's next
├── 02-patterns/       # Recurring approaches that work
├── 03-guides/         # How-to reference material
├── 04-history/        # Completed work archive
│   └── sessions/      # Session logs
└── 05-roadmap/        # Future plans

The key file is 01-active/currentWork.md. This is what the LLM reads first. It should be lean — around 100 lines — capturing only what's hot right now. Everything else is reference material the LLM can pull in as needed.
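If you want to set the structure up programmatically, the tree above can be scaffolded with a short script. This is a sketch: the directory and file names mirror the layout in this article, and the placeholder headers are just a convenience.

```python
# Sketch: scaffold the memory-bank layout described above.
# Names mirror the article's tree; adjust to fit your own needs.
from pathlib import Path

LAYOUT = {
    "00-core": [],
    "01-active": ["currentWork.md", "blockers.md", "nextUp.md"],
    "02-patterns": [],
    "03-guides": [],
    "04-history/sessions": [],
    "05-roadmap": [],
}

def scaffold(root: str = "memory-bank") -> Path:
    base = Path(root)
    for subdir, files in LAYOUT.items():
        d = base / subdir
        d.mkdir(parents=True, exist_ok=True)
        for name in files:
            f = d / name
            if not f.exists():
                # Seed each active file with a placeholder heading
                f.write_text(f"# {name}\n\n")
    return base

if __name__ == "__main__":
    scaffold()
```

Running it is idempotent: existing files are left untouched, so it's safe to re-run as the structure evolves.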

The Context Clearing Ritual

The memory bank only works if it's kept current. The context clearing ritual is the practice that ensures this:

1. Saturation

Talk at length until the LLM grasps what you're putting down. Don't rush this. Let it ask questions. Respond to its translations. Keep going until its responses feel precise. You'll know because the translations will feel uncannily accurate.

2. Externalization

Direct the LLM to write or update the memory bank. Be explicit about scope: "update lightly" (just what changed), "update thoroughly" (comprehensive sweep), or "add this to the patterns section" (specific placement).

3. Context Clear

End the session. Close the window. Let it go. The context is now externalized. The conversation state lives in files, not in the LLM's window.

4. Rehydration

New session. The LLM reads the memory bank and reports back. This is reconstruction — rebuilding the mental model from the fossilized state.

5. Fidelity Check

Does the rehydrated LLM match the saturated one? Can it continue where you left off? Are the nuances preserved, or just the facts? If fidelity is low, iterate on the memory bank structure itself.
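The rehydration step can be partly scripted. The sketch below assembles a session-opening prompt from the hot files in 01-active/; the file names follow the tree above, and the prompt wording is purely illustrative.

```python
# Sketch: build a rehydration prompt from the hot files (step 4).
# File names follow the article's tree; prompt wording is illustrative.
from pathlib import Path

HOT_FILES = ["currentWork.md", "blockers.md", "nextUp.md"]  # read-first order

def rehydration_prompt(bank: str = "memory-bank") -> str:
    active = Path(bank) / "01-active"
    sections = []
    for name in HOT_FILES:
        path = active / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text().strip()}")
    body = "\n\n".join(sections)
    return (
        "Read the memory bank state below, then report back what you "
        "understand about current work, blockers, and next steps.\n\n" + body
    )
```

Paste the result at the top of a new session, then run the fidelity check against what comes back.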

The Goal

100% fidelity across the context boundary. This is aspirational but approachable. You won't hit 100%. But you can hit 95%+, which is enough for continuity to feel seamless.

What Goes In The Memory Bank

Anything that the next session needs to know:

  • Active projects — What you're building, current status, immediate next steps
  • Key relationships — People you're working with, context that matters (this is the Farley file dimension)
  • Decisions made — Why you chose X over Y, so you don't relitigate
  • Patterns — Approaches that work, tools you use, conventions you follow
  • Blockers — What's stuck and why, so the next session can attack it fresh
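To make the categories above concrete, here is a hypothetical currentWork.md. Every project name, person, and detail in it is invented; the point is the shape — priorities first, decisions with reasons, blockers with context.

```markdown
# Current Work

## Now (highest priority first)
- Shipping the billing migration: schema done, cutover script in review
- Decision: chose incremental backfill over big-bang (see 02-patterns/)

## Blocked
- Waiting on staging credentials from ops (asked Tuesday)

## Next up
- Draft the rollback runbook before cutover
```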

What Doesn't Go In

  • Stale information — Old context actively misleads. Archive aggressively.
  • Verbose dumps — Keep it lean. A 500-line currentWork.md means the LLM can't distinguish what's important.
  • Speculative conclusions — Only commit things you've verified. The memory bank is source of truth.

Common Failure Modes

  • Symptom: the LLM confidently references outdated things. Cause: the memory bank is stale. Fix: more frequent externalization; archive old sessions.
  • Symptom: the LLM knows facts but misses priorities. Cause: currentWork.md doesn't signal what's hot. Fix: be explicit about priority; lead with what matters now.
  • Symptom: rehydration feels like starting over. Cause: the memory bank captures data but not understanding. Fix: include why, not just what: decisions, reasoning, context.
  • Symptom: LLM responses feel generic. Cause: the memory bank is too broad, with not enough specificity. Fix: more structure, named patterns, concrete examples.

The Farley File Connection

James Farley, FDR's postmaster general, kept meticulous files on every person he met — their interests, family details, last conversation topics. This made every interaction feel personal and continuous.

The memory bank is Farley files for AI. But it goes further: it's not just people. It's projects, patterns, decisions, strategies — the full state of your professional and creative life, structured so an LLM can reconstruct the context and continue where you left off.

Getting Started

  1. Create a memory-bank/ directory in your home folder or project root
  2. Create 01-active/currentWork.md — write 50-100 lines about what you're currently working on
  3. At the end of your next Claude session, say: "Update the memory bank with what we covered"
  4. Start your next session by having Claude read the memory bank first
  5. Check fidelity. Iterate. The structure will evolve to fit your needs.
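In practice, step 4 can be as simple as opening the session with a prompt along these lines (the wording is illustrative, not a required incantation):

```text
Read everything under memory-bank/, starting with
01-active/currentWork.md, and report back: what am I working on,
what's blocked, and what's next? Flag anything that seems stale.
```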