Layer 5: Context Manager <- Manages the LLM's context window
| Monitors token usage
| Proactive flush at 60% capacity
| Auto-compacts at 70% capacity
| Extracts facts before discarding messages
|
Layer 4: Learnings <- Self-improvement through failure tracking
| learnings/errors.md (tool failures with context)
| learnings/corrections.md (user corrections and preferences)
| Auto-injected into system prompt each session
|
Layer 3: Workspace Files <- Durable, human-readable storage
| AGENTS.md, SOUL.md, USER.md (loaded into system prompt)
| MEMORY.md (curated long-term facts)
| HEARTBEAT.md (autonomous monitoring rules)
| memory/YYYY-MM-DD.md (daily session logs)
| BM25 search across all files
|
Layer 2: Structured Memory DB <- Hierarchical vector database
| SQLite + sqlite-vec + FTS5
| Facts with embeddings (KNN similarity search)
| Auto-categorization with category-scoped search
| 3-tier retrieval: categories -> scoped facts -> flat fallback
| Reinforcement scoring with access-count boost + recency decay
|
Layer 1: Salience Tracking <- Prioritizes important facts
| Access count, decay score, last accessed timestamp
| High-salience facts auto-surface in initial context
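
Sketches of the layers follow, from the top of the stack down. First, Layer 5: a minimal sketch of the two capacity thresholds. The `ContextManager` class, its `check()` method, and the action names are illustrative assumptions, not the project's actual API.

```python
# Minimal sketch of the Layer 5 thresholds. Class, method, and action names
# are illustrative assumptions, not the project's actual API.
FLUSH_THRESHOLD = 0.60    # proactive flush: extract facts while messages are still in context
COMPACT_THRESHOLD = 0.70  # auto-compact: summarize older messages to reclaim tokens


class ContextManager:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens

    def check(self, used_tokens: int) -> str | None:
        """Return the action to take at the current token usage, if any."""
        usage = used_tokens / self.max_tokens
        if usage >= COMPACT_THRESHOLD:
            return "compact"
        if usage >= FLUSH_THRESHOLD:
            return "flush"
        return None
```

At 60% the manager extracts facts while the relevant messages are still present; at 70% it compacts, so nothing important is lost when older turns are dropped.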
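Layer 4 injects the learnings files into the system prompt at the start of each session. The sketch below assumes the files are simply concatenated onto the base prompt; `build_system_prompt()` and the heading format are hypothetical.

```python
# Sketch of Layer 4 injection: append learnings/*.md to the system prompt
# each session. File paths come from the diagram; the helper name and heading
# format are hypothetical.
from pathlib import Path

LEARNINGS_FILES = ("learnings/errors.md", "learnings/corrections.md")


def build_system_prompt(base_prompt: str, workspace: Path = Path(".")) -> str:
    sections = [base_prompt]
    for name in LEARNINGS_FILES:
        path = workspace / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(sections)
```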
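Layer 3's search is plain BM25 over the workspace files. The sketch below uses the `rank_bm25` package and whitespace tokenization as stand-ins for whatever the project actually ships; the `workspace/` path is assumed.

```python
# Sketch of BM25 search across the workspace files (Layer 3). rank_bm25 and
# whitespace tokenization are stand-ins; the workspace/ path is assumed.
from pathlib import Path

from rank_bm25 import BM25Okapi

files = sorted(Path("workspace").rglob("*.md"))
corpus = [f.read_text(encoding="utf-8") for f in files]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])


def search_workspace(query: str, k: int = 3) -> list[tuple[Path, float]]:
    """Rank workspace files against the query and return the top k."""
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(zip(files, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]
```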
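Layer 2 is the one database-backed layer. Below is a rough sketch of the SQLite + sqlite-vec + FTS5 combination and the 3-tier retrieval as the diagram describes them; the table names, 384-dimension embeddings, over-fetch factor, and the assumption that vector rowids mirror fact ids are all illustrative, and the tier-1 category match is treated as already computed upstream.

```python
# Sketch of the Layer 2 store (SQLite + sqlite-vec + FTS5) and its 3-tier
# retrieval. Table names, embedding size, and rowid conventions are assumptions.
import sqlite3

import sqlite_vec
from sqlite_vec import serialize_float32

db = sqlite3.connect("memory.db")
db.enable_load_extension(True)
sqlite_vec.load(db)                   # load the sqlite-vec extension
db.enable_load_extension(False)

db.executescript("""
CREATE TABLE IF NOT EXISTS facts (
    id            INTEGER PRIMARY KEY,
    content       TEXT NOT NULL,
    category      TEXT,
    access_count  INTEGER DEFAULT 0,
    last_accessed TEXT
);
-- Keyword index over fact text (FTS5).
CREATE VIRTUAL TABLE IF NOT EXISTS fact_fts USING fts5(content);
-- Vector index for KNN similarity search over fact embeddings (sqlite-vec).
CREATE VIRTUAL TABLE IF NOT EXISTS fact_vec USING vec0(embedding float[384]);
""")


def knn_facts(query_vec, category=None, k=5):
    """Nearest facts by embedding distance, optionally scoped to one category."""
    rows = db.execute(
        "SELECT rowid, distance FROM fact_vec "
        "WHERE embedding MATCH ? ORDER BY distance LIMIT 20",  # over-fetch, filter below
        [serialize_float32(query_vec)],
    ).fetchall()
    hits = []
    for rowid, _distance in rows:
        # Assumes fact_vec rowids mirror facts.id.
        fact = db.execute(
            "SELECT id, content, category FROM facts WHERE id = ?", [rowid]
        ).fetchone()
        if fact and (category is None or fact[2] == category):
            hits.append(fact)
    return hits[:k]


def retrieve(query_vec, matched_categories, min_hits=3):
    """3-tier retrieval: matched categories -> category-scoped KNN -> flat fallback."""
    hits = []
    for cat in matched_categories:                  # tier 1: categories chosen upstream
        hits += knn_facts(query_vec, category=cat)  # tier 2: category-scoped KNN
    if len(hits) < min_hits:
        hits = knn_facts(query_vec)                 # tier 3: flat KNN over all facts
    return hits
```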
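Layer 1's salience score combines the access-count boost and recency decay called out in the diagram. The formula, half-life, and weight below are assumptions; the point is that facts accessed often and recently score highest and are what auto-surfaces into the initial context.

```python
# Sketch of a salience score with access-count boost + recency decay (Layer 1).
# The half-life, weight, and combining formula are assumptions.
import math
import time

HALF_LIFE_DAYS = 7.0   # assumed: score halves after a week without access
ACCESS_WEIGHT = 0.3    # assumed: weight of the access-count boost


def salience(access_count: int, last_accessed_ts: float, now: float | None = None) -> float:
    """Higher for facts accessed often and recently; used to pick what auto-surfaces."""
    now = time.time() if now is None else now
    days_idle = max(0.0, now - last_accessed_ts) / 86_400
    recency = 0.5 ** (days_idle / HALF_LIFE_DAYS)      # exponential recency decay
    boost = ACCESS_WEIGHT * math.log1p(access_count)   # diminishing-returns boost
    return recency + boost
```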