Agent memory should be curated, not just accumulated.
Federated Agents Versioned Audit Trail
FAVA Trails is a curated memory system for AI agents — with built-in context engineering protocols based on research from Stanford, Microsoft, and MIT.
Draft isolation, promotion gates, supersession chains, extractive compression, playbook reranking, atomic rollback.
Git-native storage with crash-proof persistence and a versioned audit trail,
exposed via the Model Context Protocol — or the fava-trails CLI if you prefer shell skills.
Your data lives in a Git repo you control.
Works across Claude Desktop, Claude Code, and any MCP-compatible agent — on the same machine or across environments.
pip install fava-trails
The Problem
Most agent memory systems treat every write as immediate truth. No staging. No review gate. No rollback. Bad beliefs compound silently.
In a 24-hour autonomous ML session, an agent hit a transient GPU error at hour 2 and recorded "this environment has no GPU." It then spent 22 hours doing heroic CPU-only workarounds — while a perfectly good A100 sat idle and the cloud provider kept billing.
In a chatbot deployment, a user jailbroke the bot into an unprofessional persona and the system dutifully saved it as a "user preference."
Same root cause: the memory system had no opinion about what should be in it.
Storage was never the hard part. Curated history was. Version history naturally becomes a context graph — parents, rewrites, attribution — and a draft→review→promote flow keeps the graph clean without turning into ceremony. FAVA Trails — Federated Agents Versioned Audit Trail — is that layer.
How It Works
FAVA Trails bakes in the curation pipeline that production codebases rely on — draft, review, promote, supersede, rollback — and the commit graph becomes a context graph of agent decisions.
Draft Isolation
save_thought(content="Redis caching improves latency by 3x")
Agent writes create atomic commits in an isolated draft namespace. Invisible to other agents. A jailbroken agent's corrupted persona stays in its draft — visible only to its own session — until it passes through the promotion gate.
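As a rough illustration of the visibility rule (class and method names here are hypothetical, not the FAVA Trails API, and real storage is Git history rather than an in-memory object):

```python
from dataclasses import dataclass, field

# Hypothetical sketch: drafts are scoped to the writing agent's
# session; only promoted thoughts live in the shared namespace.
@dataclass
class Trail:
    shared: list = field(default_factory=list)   # promoted truth
    drafts: dict = field(default_factory=dict)   # agent_id -> draft list

    def save_thought(self, agent_id: str, content: str) -> None:
        # Writes land in the agent's own draft namespace, never shared.
        self.drafts.setdefault(agent_id, []).append(content)

    def visible_to(self, agent_id: str) -> list:
        # An agent sees shared truth plus only its own drafts.
        return self.shared + self.drafts.get(agent_id, [])

trail = Trail()
trail.save_thought("agent-a", "Redis caching improves latency by 3x")
assert trail.visible_to("agent-b") == []   # invisible to other agents
```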
Promotion Gate
propose_truth(thought_id="01ABC...")
An independent LLM reviewer validates each thought before promotion to shared truth. Checks for contradictions, quality, and anxiety-triggering language patterns that cause agent paralysis. Fail-closed — if the reviewer is unavailable, the thought stays in drafts.
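A minimal sketch of the fail-closed behavior, with a stand-in reviewer callable (names are illustrative, not the actual API; in practice the reviewer is an LLM call):

```python
# Hypothetical promotion gate: promote only on explicit approval,
# and treat reviewer outages as a reason to keep the thought in drafts.
def propose_truth(thought: str, reviewer) -> str:
    try:
        verdict = reviewer(thought)       # an LLM review call in practice
    except Exception:
        return "draft"                    # reviewer unavailable: fail closed
    return "promoted" if verdict == "approve" else "draft"

assert propose_truth("ok", lambda t: "approve") == "promoted"
assert propose_truth("bad", lambda t: "reject") == "draft"

def down(t):                              # simulate a reviewer outage
    raise ConnectionError
assert propose_truth("ok", down) == "draft"
```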
Shared Truth
sync()
Promoted thoughts become visible to all agents. Corrections create supersession chains — originals are marked as replaced with backlinks, not deleted. Default retrieval returns only current truth, avoiding contradictory beliefs.
What Makes Agent Memory Durable
Most agent memory systems reinvent version control primitives poorly. Four architectural properties turn FAVA Trails' version history into a queryable context graph of agent decisions.
Supersession Chains
When Agent B corrects Agent A's belief, the original is superseded — marked as replaced with a backlink, not deleted. Default retrieval hides superseded beliefs. This solves Contextual Flattening, where vector databases return both the old belief and the correction with no way to distinguish which is current.
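The mechanics can be sketched in a few lines (hypothetical names; FAVA Trails keeps this in Git history rather than in-memory dicts):

```python
# Sketch of a supersession chain: corrections link old -> new, and
# default retrieval returns only unsuperseded thoughts.
class Chain:
    def __init__(self):
        self.thoughts = {}        # id -> content
        self.superseded_by = {}   # old_id -> new_id (backlink preserved)

    def add(self, tid, content):
        self.thoughts[tid] = content

    def supersede(self, old_id, new_id, content):
        self.thoughts[new_id] = content
        self.superseded_by[old_id] = new_id   # mark replaced, never delete

    def current(self):
        # Hide superseded beliefs; the full chain stays queryable.
        return {t: c for t, c in self.thoughts.items()
                if t not in self.superseded_by}

c = Chain()
c.add("t1", "this environment has no GPU")
c.supersede("t1", "t2", "transient GPU error; an A100 is available")
assert "t1" not in c.current() and "t2" in c.current()
assert c.superseded_by["t1"] == "t2"      # the audit trail survives
```

This is exactly what vector retrieval lacks: without the backlink, both beliefs come back with equal standing.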
Crash Resilience
Every thought is an atomic commit. The Git-native engine treats the working copy as a commit — there is no "unsaved work" state. If an agent session crashes mid-analysis, everything saved to that point is durable. No checkpoint rituals. No recovery procedures.
First-Class Conflicts
When two agents write to the same trail simultaneously, standard Git blocks until a human resolves the conflict. FAVA Trails treats conflicts as data: both versions are preserved, and the next agent to sync resolves the divergence programmatically. Essential for async multi-agent workflows.
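A toy model of conflict-as-data, assuming a newest-wins resolver for illustration (not the actual resolution logic):

```python
# Hypothetical sketch: concurrent writes are never rejected; both
# versions are stored, and the next sync resolves the divergence.
def write(store, key, value):
    store.setdefault(key, []).append(value)   # preserve every version

def sync(store, key, resolve):
    versions = store.get(key, [])
    if len(versions) > 1:                     # divergence detected
        store[key] = [resolve(versions)]      # programmatic resolution
    return store[key][0]

store = {}
write(store, "plan", "use Redis")             # agent A
write(store, "plan", "use Memcached")         # agent B, concurrently
# The next agent to sync resolves, here with newest-wins:
assert sync(store, "plan", lambda vs: vs[-1]) == "use Memcached"
```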
MCP + CLI
Exposed via the Model Context Protocol — any MCP-compatible agent framework integrates with a config change, not a code change. The fava-trails CLI gives read access and an easy control plane: init projects, bootstrap or clone data repos, run doctor to validate your setup, and configure lifecycle protocols.
Lifecycle Hooks
Plug research-backed intelligence into the memory pipeline. Each hook intercepts a lifecycle moment, and each protocol is a pre-built implementation you enable with one command.
SECOM
Extractive Compression
Compresses thoughts at promote time via token-level extraction. Keeps the semantic core, drops the noise.
fava-trails secom setup --write
Tsinghua / Microsoft — ICLR 2025
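The general shape of extraction-based compression, as a toy sketch (this is not the SECOM algorithm, just the extract-and-drop pattern it belongs to, with a deliberately naive salience score):

```python
# Toy extractive compressor: keep the highest-scoring sentences
# verbatim, drop the rest. Real systems score spans with a model.
def compress(text: str, keep: int = 1) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Naive salience proxy: longer sentences carry more content words.
    ranked = sorted(sentences, key=lambda s: len(s.split()), reverse=True)
    kept = set(ranked[:keep])
    return ". ".join(s for s in sentences if s in kept) + "."

note = "Checked logs. Redis cache cut p99 latency from 120ms to 40ms. Done."
assert compress(note) == "Redis cache cut p99 latency from 120ms to 40ms."
```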
ACE
Playbook Reranking
Reranks recalled thoughts against your playbook rules. Detects stale references, contradictions, and unsupported claims.
fava-trails ace setup --write
Stanford / UC Berkeley / SambaNova
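A toy sketch of rule-aware reranking (not the ACE implementation): thoughts that violate a playbook rule sink in the ranking.

```python
# Hypothetical reranker: count playbook-rule violations per thought
# and sort clean thoughts to the front. Real detection uses a model.
def rerank(thoughts, rules):
    def penalty(t):
        return sum(1 for bad in rules if bad in t)   # rule violations
    return sorted(thoughts, key=penalty)             # clean thoughts first

thoughts = ["use the deprecated v1 API", "use the v2 client"]
rules = ["deprecated"]                # playbook flags stale references
assert rerank(thoughts, rules)[0] == "use the v2 client"
```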
RLM
MapReduce Orchestration
Validates mapper outputs, tracks batch progress, and sorts results for reducer consumption.
fava-trails rlm setup --write
MIT
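A sketch of the validate-then-sort step (hypothetical, not the RLM protocol itself): invalid mapper batches are rejected before the reducer sees them, and results arrive in batch order.

```python
# Toy orchestration: validate mapper outputs, then sort by batch
# index so the reducer consumes results in order.
def orchestrate(mapper_outputs):
    valid = [(i, r) for i, r in mapper_outputs if r is not None]
    if len(valid) != len(mapper_outputs):
        raise ValueError("mapper produced an empty batch")
    return [r for _, r in sorted(valid)]     # ordered for the reducer

assert orchestrate([(1, "b"), (0, "a")]) == ["a", "b"]
```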
With vs. Without
What multi-agent memory coordination looks like with and without process discipline.
| Operation | Without Process Discipline | With FAVA Trails |
|---|---|---|
| Share research | Operator copies between agents manually | Agent saves to trail. Other agent runs recall() |
| Correct a stale reference | Operator relays error. Source agent regenerates entire document | Agent supersedes the one thought. Others sync and see the diff |
| Prevent bad beliefs | Every write immediately visible. Contamination spreads silently | Writes enter as drafts. Promotion requires propose_truth() |
| Audit what changed | Operator's memory. Chat logs across conversations | Supersession chain: parent_id, agent_id, timestamps |
| Roll back a mistake | Manual cleanup. Hope you found all contaminated entries | rollback() — atomic, returns trail to prior state |
| Resume in new session | Operator re-briefs from scratch | recall() returns full decision record in one call |
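The rollback row above can be sketched as snapshot restoration (hypothetical names; FAVA Trails restores from Git history rather than an in-memory list):

```python
# Sketch of atomic rollback over a linear history: each promotion is
# a snapshot, and rollback restores a prior snapshot whole, so there
# is no entry-by-entry cleanup to get wrong.
history = []

def promote(state, change):
    new = {**state, **change}
    history.append(new)              # every promotion is a snapshot
    return new

def rollback(steps=1):
    del history[-steps:]             # drop contaminated snapshots
    return history[-1] if history else {}

s = promote({}, {"gpu": "A100 available"})
s = promote(s, {"gpu": "no GPU"})    # bad belief promoted by mistake
assert rollback()["gpu"] == "A100 available"
```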
See how FAVA Trails compares to vector databases, knowledge graphs, and structured task trackers in our landscape analysis.
Who Is This For
Running Multiple Agents?
They share memory. That means they share hallucinations. FAVA Trails gives you draft isolation, promotion gates, and supersession chains so bad beliefs don't propagate. One agent's mistake stays in its draft — invisible to every other agent — until it passes review.
Read the case study
Long-Running Sessions?
Long agent sessions degrade as context accumulates — not because of model limits, but because agents lose track of their own work. FAVA Trails gives you crash-proof persistence and structured recall. Your agent picks up exactly where it left off, even after a crash or context window reset.
See how it works
Governing Context That Won't Fit?
A 1,000-page spec, a regulatory corpus, a sprawling style guide. When the governing context exceeds a single context window, lifecycle hooks and context engineering protocols handle the splitting, compression, and reassembly. Built-in protocols include ACE (Stanford, UC Berkeley, SambaNova), SECOM (Tsinghua, Microsoft), and RLM (MIT). Or write your own.
Explore protocols
Quick Start
Four steps to curated agent memory.
Install
pip install fava-trails
fava-trails install-jj  # one-time storage engine setup
The storage engine is Jujutsu (JJ), running in colocate mode alongside Git. One-time install; your repo remains a standard Git repo. This does not replace Git.
Set Up a Data Repo
Your data lives in a standard Git repo you control. Create a private repo on GitHub (e.g. YOUR-ORG/fava-trails-data), then:
# New repo (from scratch)
fava-trails bootstrap fava-trails-data --remote https://github.com/YOUR-ORG/fava-trails-data.git
# Or clone an existing one
fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git
Register as an MCP Server
Add to your MCP client config (~/.claude.json for Claude Code CLI, claude_desktop_config.json for Claude Desktop):
{
"mcpServers": {
"fava-trails": {
"command": "fava-trails-server",
"env": {
"FAVA_TRAILS_DATA_REPO": "/path/to/fava-trails-data",
"OPENROUTER_API_KEY": "sk-or-v1-..."
}
}
}
}
The Trust Gate is an independent LLM reviewer that validates thoughts before they enter shared memory. It uses OpenRouter as its model provider. Get a free API key at openrouter.ai/keys.
Enable Lifecycle Protocols
# Pick any combination — each is independent
fava-trails secom setup --write # compression (requires [secom] extra)
fava-trails ace setup --write # playbook reranking
fava-trails rlm setup --write   # MapReduce orchestration
Each protocol hooks into the memory lifecycle and enables independently. SECOM compresses, ACE reranks, RLM orchestrates — pick the ones that fit your workflow.
Explore protocols →
Two Agents. One Trail. A Real Correction.
A Claude Desktop agent and a Claude Code CLI agent collaborated on a spec through shared curated memory. When one agent found stale references, it superseded a single thought — and the other agent understood the correction on sync, without operator intervention.
Read the Full Case Study
Try FAVA Trails
Curated agent memory with the process discipline of production codebases.
Apache 2.0 licensed. Git-native. Your data stays in a repo you control.