FAVA Trails exposes lifecycle hooks at key moments — `before_save`, `on_recall`, `before_propose`, and more. Protocols are pre-built hook implementations that plug into these points. Each is independent; enable any combination.
## SECOM — Extractive Compression
Adapted from SECOM (Tsinghua University and Microsoft, ICLR 2025), which applies LLMLingua-2 extractive compression to conversational memory retrieval. Compresses thoughts at promote time via token-level extraction, keeping the semantic core while dramatically reducing storage.
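The extractive idea can be sketched in a few lines. This is a toy, not LLMLingua-2: the real protocol gets per-token importance scores from the BERT model below, while here the scores are invented for illustration.

```python
def extractive_compress(tokens, scores, target_ratio=0.5, force_tokens=()):
    """Keep the top-scoring fraction of tokens, preserving original order.

    `force_tokens` always survive, mirroring the force_tokens option.
    """
    keep_n = max(1, int(len(tokens) * target_ratio))
    # Rank token positions by importance score, highest first.
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    kept = set(ranked[:keep_n])
    kept |= {i for i, t in enumerate(tokens) if t in force_tokens}
    return [tokens[i] for i in sorted(kept)]

tokens = ["the", "deploy", "script", "now", "requires", "a", "staging", "flag"]
scores = [0.1, 0.9, 0.8, 0.2, 0.7, 0.1, 0.9, 0.8]
print(extractive_compress(tokens, scores, target_ratio=0.5))
# → ['deploy', 'script', 'staging', 'flag']
```

The output keeps the semantic core ("deploy script staging flag") at half the token count, which is what the `target_ratio: 0.5` default aims for.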
Hook points: `before_propose`, `before_save`, `on_recall`
Requires the `secom` extra:

```shell
pip install fava-trails[secom]
```
First use downloads a ~700 MB BERT model from HuggingFace. Pre-download it with:

```shell
fava-trails secom warmup
```
### Setup
One command writes the hook entries into your data repo’s `config.yaml`:

```shell
fava-trails secom setup --write
```
Or add manually:
```yaml
# config.yaml
lifecycle:
  before_propose:
    - protocol: secom
      config:
        target_ratio: 0.5   # compress to ~50% of original tokens
        force_tokens: []    # tokens to always keep
        model: "microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank"
```
### Key options
| Option | Default | Description |
|---|---|---|
| `target_ratio` | `0.5` | Target compression ratio (0.0–1.0) |
| `force_tokens` | `[]` | Tokens that must survive compression |
| `model` | `microsoft/llmlingua-2-bert-base-multilingual-cased-meetingbank` | HuggingFace model ID |
### Known limitation
SECOM destroys structured data: JSON, YAML, code blocks, and tables lose their formatting after extraction. Tag thoughts with `secom-skip` to opt structured content out of compression.
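A minimal sketch of the opt-out check a compression hook might perform (the `thought` dict shape and `tags` field are assumptions for illustration, not FAVA Trails' actual schema):

```python
STRUCTURED_SKIP_TAG = "secom-skip"

def maybe_compress(thought: dict, compress) -> dict:
    """Apply `compress` to the body unless the thought opted out via secom-skip."""
    if STRUCTURED_SKIP_TAG in thought.get("tags", []):
        return thought  # structured content passes through untouched
    return {**thought, "body": compress(thought["body"])}
```

With `str.upper` standing in for the compressor, a thought tagged `secom-skip` comes back unchanged, while an untagged one gets transformed.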
## ACE — Agentic Context Engineering
Based on ACE (Stanford, UC Berkeley, and SambaNova). Applies playbook-driven reranking and anti-pattern detection to recalled thoughts, surfacing what matters and flagging what’s stale or contradictory.
Hook points: `on_startup`, `on_recall`, `before_save`, `after_save`, `after_propose`, `after_supersede`
Included in the base install — no extras needed.
### Setup
```shell
fava-trails ace setup --write
```
Or add manually:
```yaml
# config.yaml
lifecycle:
  on_recall:
    - protocol: ace
      config:
        scoring: multiplicative   # combine rule scores multiplicatively
        rules_namespace: "preferences/"
        anti_patterns:
          - stale_reference
          - contradictory_belief
          - unsupported_claim
```
### Key options
| Option | Default | Description |
|---|---|---|
| `scoring` | `multiplicative` | How rule scores combine (`multiplicative` or `additive`) |
| `rules_namespace` | `preferences/` | Namespace where playbook rules are stored |
| `anti_patterns` | `[stale_reference, contradictory_belief, unsupported_claim]` | Patterns to detect and penalize |
### How it works
ACE stores scoring rules as thoughts in the `preferences/` namespace. When you recall, ACE reranks results by applying each rule multiplicatively — a thought that triggers an anti-pattern gets downweighted, while thoughts matching active playbook rules get boosted. Rules are themselves thoughts, so they participate in supersession and versioning.
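The multiplicative combination described above can be sketched as follows; the thought dicts, `base_score` field, and rule shapes are hypothetical illustrations, not ACE's real internals:

```python
def rerank(thoughts, rules):
    """Rerank recalled thoughts by the product of rule multipliers.

    Each rule maps a thought to a multiplier: >1 boosts, <1 penalizes
    (e.g. an anti-pattern match), and 1.0 leaves the score unchanged.
    """
    def score(thought):
        s = thought["base_score"]
        for rule in rules:
            s *= rule(thought)
        return s
    return sorted(thoughts, key=score, reverse=True)

# Hypothetical rules: boost playbook matches, penalize a detected anti-pattern.
rules = [
    lambda t: 2.0 if "preferences" in t.get("tags", []) else 1.0,
    lambda t: 0.1 if "stale_reference" in t.get("flags", []) else 1.0,
]
thoughts = [
    {"id": "a", "base_score": 0.9, "flags": ["stale_reference"]},
    {"id": "b", "base_score": 0.5, "tags": ["preferences"]},
]
print([t["id"] for t in rerank(thoughts, rules)])  # → ['b', 'a']
```

Note how the multiplicative combination lets a single anti-pattern hit (0.1×) overwhelm a high base score — the lower-scored but rule-matching thought wins.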
## RLM — MapReduce Orchestration
Based on RLM (MIT). Validates mapper outputs, tracks batch progress, and sorts results for reducer consumption — essential for multi-agent workflows that fan out and collect.
Hook points: `before_save`, `after_save`, `on_recall`
Included in the base install — no extras needed.
### Setup
```shell
fava-trails rlm setup --write
```
Or add manually:
```yaml
# config.yaml
lifecycle:
  before_save:
    - protocol: rlm
      config:
        fail_mode: closed        # reject invalid mapper outputs
        required_fields:
          - batch_id
          - mapper_id
        sort_key: "mapper_id"    # sort order for reducer recall
```
### Key options
| Option | Default | Description |
|---|---|---|
| `fail_mode` | `closed` | `closed` rejects invalid outputs; `open` warns but allows |
| `required_fields` | `[batch_id, mapper_id]` | Fields every mapper output must contain |
| `sort_key` | `mapper_id` | Field used to sort results for the reducer |
### How it works
When a mapper agent calls `save_thought`, RLM validates that the metadata contains all `required_fields`. In `closed` mode, missing fields cause the save to fail — no partial batches pollute the trail. On recall, RLM sorts results by `sort_key` so the reducer agent receives them in deterministic order. Batch progress is tracked via metadata, letting orchestrators query completion status.
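A sketch of the two behaviors described above, closed-mode validation and deterministic sorting. The function names and metadata shape are illustrative assumptions, not the actual RLM hook code:

```python
class RejectedThought(Exception):
    """Raised in closed mode when a mapper output fails validation."""

def validate_mapper_output(metadata, required_fields=("batch_id", "mapper_id"),
                           fail_mode="closed"):
    """Return True if metadata has every required field.

    In closed mode a missing field raises, failing the save outright;
    in open mode it merely reports False (warn-but-allow).
    """
    missing = [f for f in required_fields if f not in metadata]
    if missing and fail_mode == "closed":
        raise RejectedThought(f"missing fields: {missing}")
    return not missing

def sort_for_reducer(results, sort_key="mapper_id"):
    """Deterministic ordering so the reducer sees a stable input sequence."""
    return sorted(results, key=lambda r: r[sort_key])
```

In closed mode the exception propagates out of the hook, so the invalid thought never lands in the trail; open mode lets orchestrators log the gap and continue.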
## Combining Protocols
Protocols are composable. A common production stack:
```yaml
lifecycle:
  before_save:
    - protocol: rlm     # validate mapper outputs first
    - protocol: ace     # check anti-patterns before persisting
  before_propose:
    - protocol: secom   # compress before promoting to truth
  on_recall:
    - protocol: ace     # rerank recalled thoughts
    - protocol: rlm     # sort for reducer consumption
```
Hooks execute in the order listed. If a closed-mode hook rejects a thought, later hooks in the chain are skipped.
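The ordering semantics can be sketched as a simple chain runner; the hook signature here is hypothetical, with `None` modeling a closed-mode rejection:

```python
def run_chain(hooks, thought):
    """Run hooks in listed order; a rejection short-circuits the chain.

    Each hook returns the (possibly transformed) thought, or None to reject.
    """
    for hook in hooks:
        thought = hook(thought)
        if thought is None:
            return None  # rejected: later hooks in the chain are skipped
    return thought

# Illustrative hooks: a closed-mode validator followed by a transformer.
def require_batch_id(t):
    return t if "batch_id" in t else None

def tag_reviewed(t):
    return {**t, "reviewed": True}

hooks = [require_batch_id, tag_reviewed]
print(run_chain(hooks, {"batch_id": 1}))  # → {'batch_id': 1, 'reviewed': True}
print(run_chain(hooks, {}))               # → None (tag_reviewed never runs)
```

Ordering matters: putting the validator first means rejected thoughts never reach the transformer, matching the "validate mapper outputs first" comment in the stack above.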
## Further Reading
- FAVA Trails README — full install and MCP setup
- SECOM paper — extractive compression for conversational memory (Tsinghua/Microsoft, ICLR 2025)
- LLMLingua-2 paper — token-level extraction underlying SECOM
- ACE paper — playbook-driven context engineering (Stanford/UC Berkeley/SambaNova, ICLR 2026)
- RLM paper — MapReduce for language model orchestration (MIT)