Semantic side-effect tracking for AI agents.
When an AI agent crashes mid-task, what happens on restart? Without effect-log, irreversible actions (emails, payments, deployments) get repeated. With effect-log, the system knows what completed and what didn't — it returns sealed results for finished work and resumes from where it left off.
Every tool declares its effect kind at registration time. This drives all recovery behavior:
| EffectKind | Recovery (completed) | Recovery (crashed) |
|---|---|---|
| ReadOnly | Replay for fresh data | Replay safely |
| IdempotentWrite | Return sealed result | Replay with same key |
| Compensatable | Return sealed result | Compensate, then replay |
| IrreversibleWrite | Return sealed result | Escalate to human |
| ReadThenWrite | Return sealed result | Escalate to human |
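The recovery matrix above can be sketched as a plain lookup table. This is a standalone illustration, not the library's API — the enum and action names here are hypothetical:

```python
# Hypothetical sketch: the recovery matrix as a lookup from
# (effect kind, whether the step completed) to a recovery action.
# Names are illustrative, not effect-log's actual types.
from enum import Enum, auto

class EffectKind(Enum):
    READ_ONLY = auto()
    IDEMPOTENT_WRITE = auto()
    COMPENSATABLE = auto()
    IRREVERSIBLE_WRITE = auto()
    READ_THEN_WRITE = auto()

RECOVERY = {
    (EffectKind.READ_ONLY, True): "replay",
    (EffectKind.READ_ONLY, False): "replay",
    (EffectKind.IDEMPOTENT_WRITE, True): "return_sealed",
    (EffectKind.IDEMPOTENT_WRITE, False): "replay_same_key",
    (EffectKind.COMPENSATABLE, True): "return_sealed",
    (EffectKind.COMPENSATABLE, False): "compensate_then_replay",
    (EffectKind.IRREVERSIBLE_WRITE, True): "return_sealed",
    (EffectKind.IRREVERSIBLE_WRITE, False): "escalate",
    (EffectKind.READ_THEN_WRITE, True): "return_sealed",
    (EffectKind.READ_THEN_WRITE, False): "escalate",
}

def recovery_action(kind: EffectKind, completed: bool) -> str:
    return RECOVERY[(kind, completed)]
```

Note the asymmetry: completed steps are safe to seal regardless of kind, so the effect kind only changes what happens for steps that crashed mid-flight.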
```python
from effect_log import EffectKind, EffectLog, ToolDef

def send_email(args):
    return smtp.send(args["to"], args["subject"], args["body"])

tools = [
    ToolDef("read_file", EffectKind.ReadOnly, read_file),
    ToolDef("send_email", EffectKind.IrreversibleWrite, send_email),
    ToolDef("upsert", EffectKind.IdempotentWrite, upsert_record),
]

log = EffectLog(execution_id="task-001", tools=tools, storage="sqlite:///effects.db")

log.execute("read_file", {"path": "/tmp/report.csv"})
log.execute("send_email", {"to": "ceo@co.com", "subject": "Report", "body": "..."})
log.execute("upsert", {"id": "report-001", "data": data})
```

Recovery — just add `recover=True` and re-run the same steps:
```python
log = EffectLog(execution_id="task-001", tools=tools, storage="sqlite:///effects.db", recover=True)

log.execute("read_file", {"path": "/tmp/report.csv"})  # Replayed (fresh data)
log.execute("send_email", {"to": "ceo@co.com", ...})   # Sealed — NOT re-sent
log.execute("upsert", {"id": "report-001", ...})       # Replayed (idempotent)
```

Built-in middleware for major agent frameworks:
| Framework | Middleware | Entry Points |
|---|---|---|
| LangGraph | effect_log.middleware.langgraph | EffectLogToolNode, effect_logged_tools |
| OpenAI Agents SDK | effect_log.middleware.openai_agents | effect_logged_agent, wrap_function_tool |
| CrewAI | effect_log.middleware.crewai | effect_logged_crew, effect_logged_tool |
| Pydantic AI | effect_log.middleware.pydantic_ai | effect_logged_agent, EffectLogToolset |
| Anthropic Claude API | effect_log.middleware.anthropic | effect_logged_tool_executor, process_tool_calls |
| Bub | effect_log.middleware.bub | effect_logged_registry, effect_logged_tool |
See examples/ for runnable demos:
- crash_recovery.py — Core crash recovery demo (Phase 1 milestone)
- langgraph_integration.py — LangGraph ToolNode + tool wrapping
- openai_agents_integration.py — OpenAI Agents SDK wrapping
- crewai_integration.py — CrewAI tool + crew wrapping
- pydantic_ai_integration.py — Pydantic AI toolset wrapping
- anthropic_integration.py — Anthropic Claude API tool_use
- e2e_bub.py — Bub agent crash recovery for bash/file tools
A write-ahead log with two record types:
- Intent — written before execution (tool name, effect kind, input, cursor)
- Completion — written after execution (outcome, sealed response)
An intent without a matching completion means a crash was detected. The recovery engine then uses the effect kind to decide what to do: replay, return the sealed result, compensate, or escalate.
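The crash-detection rule can be sketched with a small SQLite table. This is an illustrative model of the two-record protocol, not the engine's actual schema — table and function names are assumptions:

```python
# Hypothetical sketch of the two-record WAL protocol. An intent row
# whose cursor has no matching completion row marks a crashed step.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE wal (
    cursor INTEGER, record TEXT, tool TEXT, payload TEXT)""")

def write_intent(cursor, tool, args):
    # Written BEFORE the tool runs.
    conn.execute("INSERT INTO wal VALUES (?, 'intent', ?, ?)",
                 (cursor, tool, args))

def write_completion(cursor, tool, result):
    # Written AFTER the tool returns; seals the outcome.
    conn.execute("INSERT INTO wal VALUES (?, 'completion', ?, ?)",
                 (cursor, tool, result))

def crashed_steps():
    # Intents whose cursor never received a completion record.
    return list(conn.execute(
        """SELECT cursor, tool FROM wal WHERE record = 'intent'
           AND cursor NOT IN
           (SELECT cursor FROM wal WHERE record = 'completion')"""))

write_intent(0, "read_file", '{"path": "/tmp/report.csv"}')
write_completion(0, "read_file", '"...csv data..."')
write_intent(1, "send_email", '{"to": "ceo@co.com"}')  # crash before completion

print(crashed_steps())  # [(1, 'send_email')]
```

Because the intent is durable before the side effect runs, a crash at any point leaves enough evidence to classify every step as either sealed or in-doubt.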
```bash
# Rust
cargo build --release
cargo test --workspace --all-features

# Python
cd bindings/python
pip install maturin
maturin develop --all-features
pytest tests/ -v
```

- Core library — WAL engine, recovery engine, SQLite + in-memory backends
- Python bindings — PyO3 + maturin
- Python bindings — PyO3 + maturin
- Framework middleware — LangGraph, OpenAI Agents SDK, CrewAI, Pydantic AI, Anthropic Claude API, Bub
- TypeScript bindings — napi-rs, Vercel AI SDK
- Additional backends — RocksDB, S3, Restate journal
- Auto-classification — infer effect kind from HTTP methods / API metadata
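Auto-classification could lean on standard HTTP semantics, where GET/HEAD are safe and PUT/DELETE are idempotent (per RFC 9110). The mapping below is a hypothetical sketch of that idea, not the planned implementation:

```python
# Hypothetical sketch: infer an effect kind from an HTTP method.
# The mapping is an assumption based on RFC 9110 method semantics,
# not effect-log's actual classifier.
HTTP_METHOD_EFFECTS = {
    "GET": "ReadOnly",
    "HEAD": "ReadOnly",
    "PUT": "IdempotentWrite",     # idempotent by HTTP semantics
    "DELETE": "IdempotentWrite",  # idempotent by HTTP semantics
    "POST": "IrreversibleWrite",  # not idempotent; be conservative
    "PATCH": "IrreversibleWrite",
}

def classify(method: str) -> str:
    # Unknown methods fall back to the safest kind, which escalates
    # to a human on crash rather than risking a duplicate side effect.
    return HTTP_METHOD_EFFECTS.get(method.upper(), "IrreversibleWrite")
```

Defaulting unknown methods to IrreversibleWrite errs toward escalation, which is the safe failure mode for this system.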
This project was inspired by a blog post by Guanlan Dai, which introduced the concepts of an effect log and semantic correctness.
Apache-2.0