
Auto-Claude Memory System

Graphiti-based persistent memory for cross-session context retention.

Overview


Auto-Claude uses Graphiti with embedded LadybugDB for memory:
  • No Docker required - Embedded graph database
  • Multi-provider support - OpenAI, Anthropic, Ollama, Google AI, Azure
  • Semantic search - Find relevant context across sessions
  • Knowledge graph - Entity relationships and facts

Architecture


```
Agent Session
      │
      ▼
Memory Manager
      ├──▶ Add Episode (new learnings)
      ├──▶ Search Nodes (find entities)
      ├──▶ Search Facts (find relationships)
      └──▶ Get Context (relevant memories)
      │
      ▼
Graphiti (Knowledge Graph)
      │
      ▼
LadybugDB (Embedded Storage)
```
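The four Memory Manager operations can be sketched as a minimal in-memory stand-in. This is illustrative only — the real implementation persists everything in a Graphiti knowledge graph on LadybugDB, and all names below are hypothetical:

```python
# Toy stand-in for the Memory Manager's four operations, using plain
# Python containers instead of a persistent knowledge graph.
class ToyMemoryManager:
    def __init__(self):
        self.episodes = []   # session insights, in insertion order
        self.nodes = {}      # entity name -> summary
        self.facts = []      # (source, relation, target) triples

    def add_episode(self, text):
        self.episodes.append(text)

    def search_nodes(self, query):
        return [n for n in self.nodes if query.lower() in n.lower()]

    def search_facts(self, query):
        return [f for f in self.facts
                if any(query.lower() in part.lower() for part in f)]

    def get_context(self, query):
        return [e for e in self.episodes if query.lower() in e.lower()]

mm = ToyMemoryManager()
mm.add_episode("OAuth tokens must be refreshed before expiry")
mm.nodes["UserService"] = "handles auth"
mm.facts.append(("UserService", "uses", "Database"))
print(mm.get_context("oauth"))      # finds the stored episode
print(mm.search_facts("database"))  # finds the triple
```

The real system replaces the substring matching above with semantic (embedding-based) search.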

Configuration


Enable Memory System


In `apps/backend/.env`:

```bash
# Enable Graphiti memory (default: true)
GRAPHITI_ENABLED=true
```

Provider Selection


Choose LLM and embedding providers:

```bash
# LLM provider: openai | anthropic | azure_openai | ollama | google | openrouter
GRAPHITI_LLM_PROVIDER=openai

# Embedder provider: openai | voyage | azure_openai | ollama | google | openrouter
GRAPHITI_EMBEDDER_PROVIDER=openai
```

Provider Configurations


OpenAI (Simplest)


```bash
GRAPHITI_ENABLED=true
GRAPHITI_LLM_PROVIDER=openai
GRAPHITI_EMBEDDER_PROVIDER=openai
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxx
OPENAI_MODEL=gpt-4o-mini
OPENAI_EMBEDDING_MODEL=text-embedding-3-small
```

Anthropic + Voyage (High Quality)


```bash
GRAPHITI_ENABLED=true
GRAPHITI_LLM_PROVIDER=anthropic
GRAPHITI_EMBEDDER_PROVIDER=voyage
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
GRAPHITI_ANTHROPIC_MODEL=claude-sonnet-4-5-latest
VOYAGE_API_KEY=pa-xxxxxxxx
VOYAGE_EMBEDDING_MODEL=voyage-3
```

Ollama (Fully Offline)


```bash
GRAPHITI_ENABLED=true
GRAPHITI_LLM_PROVIDER=ollama
GRAPHITI_EMBEDDER_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_LLM_MODEL=deepseek-r1:7b
OLLAMA_EMBEDDING_MODEL=nomic-embed-text
OLLAMA_EMBEDDING_DIM=768
```

Prerequisites:

```bash
# Install Ollama

# Pull models
ollama pull deepseek-r1:7b
ollama pull nomic-embed-text
```

Google AI (Gemini)


```bash
GRAPHITI_ENABLED=true
GRAPHITI_LLM_PROVIDER=google
GRAPHITI_EMBEDDER_PROVIDER=google
GOOGLE_API_KEY=AIzaSyxxxxxxxx
GOOGLE_LLM_MODEL=gemini-2.0-flash
GOOGLE_EMBEDDING_MODEL=text-embedding-004
```

Azure OpenAI (Enterprise)


```bash
GRAPHITI_ENABLED=true
GRAPHITI_LLM_PROVIDER=azure_openai
GRAPHITI_EMBEDDER_PROVIDER=azure_openai
AZURE_OPENAI_API_KEY=xxxxxxxx
AZURE_OPENAI_BASE_URL=https://your-resource.openai.azure.com/...
AZURE_OPENAI_LLM_DEPLOYMENT=gpt-4
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=text-embedding-3-small
```

OpenRouter (Multi-Provider)


```bash
GRAPHITI_ENABLED=true
GRAPHITI_LLM_PROVIDER=openrouter
GRAPHITI_EMBEDDER_PROVIDER=openrouter
OPENROUTER_API_KEY=sk-or-xxxxxxxx
OPENROUTER_LLM_MODEL=anthropic/claude-3.5-sonnet
OPENROUTER_EMBEDDING_MODEL=openai/text-embedding-3-small
```

Database Settings


```bash
# Database name (default: auto_claude_memory)
GRAPHITI_DATABASE=auto_claude_memory

# Storage path (default: ~/.auto-claude/memories)
GRAPHITI_DB_PATH=~/.auto-claude/memories
```
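Note that the `~` in `GRAPHITI_DB_PATH` is not expanded automatically when code reads the value from `.env`; shells expand it, but application code typically must do so explicitly. A small sketch of the expansion step:

```python
import os

# Expand "~" in the configured storage path (value as written in .env).
raw = "~/.auto-claude/memories"
db_path = os.path.expanduser(raw)
print(db_path)  # e.g. /home/user/.auto-claude/memories
```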

Memory Operations


How Memory Works


  1. During Build
    • Agent discovers patterns, gotchas, solutions
    • Memory Manager extracts insights
    • Insights stored as episodes in knowledge graph
  2. New Session
    • Agent queries for relevant context
    • Memory returns related insights
    • Agent builds on previous learnings

MCP Tools


When `GRAPHITI_MCP_URL` is set, agents can use:

| Tool | Purpose |
|------|---------|
| `search_nodes` | Search entity summaries |
| `search_facts` | Search relationships between entities |
| `add_episode` | Add data to knowledge graph |
| `get_episodes` | Retrieve recent episodes |
| `get_entity_edge` | Get specific entity/relationship |
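Agents invoke these tools over MCP. A request to `search_nodes` looks roughly like the JSON-RPC payload below — the envelope follows the MCP `tools/call` shape, while the argument key (`query`) is an assumption, not taken from the Graphiti server's schema:

```python
import json

# Hypothetical MCP "tools/call" request for the search_nodes tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_nodes",
        "arguments": {"query": "authentication patterns"},  # assumed key
    },
}
print(json.dumps(request, indent=2))
```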

Python API


```python
from integrations.graphiti.memory import get_graphiti_memory

# Get memory instance
memory = get_graphiti_memory(spec_dir, project_dir)

# Get context for session
context = memory.get_context_for_session("Implementing feature X")

# Add insight from session
memory.add_session_insight("Pattern: use React hooks for state")

# Search for relevant memories
results = memory.search("authentication patterns")
```

Memory Storage


Location


```
~/.auto-claude/memories/
├── auto_claude_memory/     # Main database
│   ├── nodes/              # Entity nodes
│   ├── edges/              # Relationships
│   └── episodes/           # Session insights
└── embeddings/             # Vector embeddings
```

Per-Spec Memory


```
.auto-claude/specs/001-feature/
└── graphiti/               # Spec-specific memory
    ├── insights.json       # Extracted insights
    └── context.json        # Session context
```

Querying Memory


Command Line


```bash
cd apps/backend

# Query memory
python query_memory.py --search "authentication"

# List recent episodes
python query_memory.py --recent 10

# Get entity details
python query_memory.py --entity "UserService"
```

Memory in Action


Example session:

```
Session 1:
  Agent: "Implemented OAuth login, discovered need to handle token refresh"
  Memory: Stores insight about token refresh pattern

Session 2:
  Agent: "Implementing user profile..."
  Memory: "Previously learned about token refresh in OAuth implementation"
  Agent: Uses learned pattern for profile API calls
```
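The carry-over shown above can be simulated with a toy store. This is a hypothetical stand-in: the real system ranks memories by semantic similarity, not the word-overlap scoring used here:

```python
# Toy cross-session memory: session 1 stores an insight, session 2
# retrieves it by checking word overlap with the new task description.
store = []

# --- Session 1: agent records what it learned ---
store.append("OAuth implementation: access tokens need a refresh flow")

# --- Session 2: agent asks for context relevant to its new task ---
def relevant(query, memories):
    q = set(query.lower().split())
    return [m for m in memories if q & set(m.lower().split())]

context = relevant("implement profile API with OAuth tokens", store)
print(context)  # the OAuth insight from session 1
```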

Best Practices


Effective Memory Use


  1. Let agents learn naturally
    • Don't force memory storage
    • Agents automatically extract insights
  2. Use semantic search
    • Query with natural language
    • Memory finds related concepts
  3. Clean up periodically
    • Remove outdated insights
    • Update incorrect information
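"Memory finds related concepts" works because queries and stored insights are compared as embedding vectors rather than strings. A toy cosine-similarity ranking in pure Python — the vectors are hand-made stand-ins, while real embeddings come from the configured embedder:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hand-made 3-d "embeddings" (real ones have hundreds of dimensions).
memories = {
    "token refresh pattern": [0.9, 0.1, 0.0],
    "CSS grid layout tips":  [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # pretend embedding of "OAuth token handling"

best = max(memories, key=lambda k: cosine(query, memories[k]))
print(best)  # the auth-related memory ranks highest
```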

Provider Selection


| Use Case | Recommended |
|----------|-------------|
| Production | OpenAI or Anthropic + Voyage |
| Development | Ollama (free, offline) |
| Enterprise | Azure OpenAI |
| Budget | OpenRouter or Google AI |

Performance Tips


  1. Embedding model selection
    • `text-embedding-3-small`: fast, good quality
    • `text-embedding-3-large`: better quality, slower
  2. LLM model selection
    • `gpt-4o-mini`: fast, cost-effective
    • `claude-sonnet`: high-quality reasoning
  3. Ollama optimization

    ```bash
    # Use smaller models for speed
    OLLAMA_LLM_MODEL=llama3.2:3b
    OLLAMA_EMBEDDING_MODEL=all-minilm
    OLLAMA_EMBEDDING_DIM=384
    ```

Troubleshooting


Memory Not Working


```bash
# Check if enabled
grep GRAPHITI apps/backend/.env

# Verify provider credentials
python -c "from integrations.graphiti.memory import get_graphiti_memory; print('OK')"
```

Provider Errors


```bash
# OpenAI
curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models

# Ollama

# Check logs
DEBUG=true python query_memory.py --search "test"
```

Database Corruption


```bash
# Backup and reset
mv ~/.auto-claude/memories ~/.auto-claude/memories.backup
python query_memory.py --search "test"  # Creates fresh DB
```

Embedding Dimension Mismatch


If changing embedding models:

```bash
# Clear existing embeddings
rm -rf ~/.auto-claude/memories/embeddings

# Restart to re-embed
python run.py --spec 001
```
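The mismatch is mechanical: similarity math requires vectors of equal length, so embeddings stored at 768 dimensions cannot be compared against queries from a new 384-dimension model. A minimal illustration:

```python
# Why stored embeddings must be cleared after switching models:
# vectors of different dimensionality cannot be compared.
def dot(a, b):
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    return sum(x * y for x, y in zip(a, b))

stored = [0.1] * 768   # embedded with nomic-embed-text (768 dims)
query  = [0.1] * 384   # embedded with all-minilm (384 dims)

try:
    dot(stored, query)
except ValueError as e:
    print(e)  # dimension mismatch: 768 vs 384
```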

Advanced Usage


Custom Memory Integration


```python
from integrations.graphiti.queries_pkg.graphiti import GraphitiMemory

# Create custom memory instance
memory = GraphitiMemory(
    database="custom_db",
    db_path="/path/to/storage",
    llm_provider="anthropic",
    embedder_provider="voyage",
)

# Custom operations
memory.add_entity("UserService", {"type": "service", "purpose": "auth"})
memory.add_relationship("UserService", "uses", "Database")
```

Memory MCP Server


Run a standalone memory server:

```bash
# Start Graphiti MCP server
GRAPHITI_MCP_URL=http://localhost:8000/mcp/ python -m integrations.graphiti.server
```

Related Skills


  • auto-claude-setup: Initial configuration
  • auto-claude-optimization: Performance tuning
  • auto-claude-troubleshooting: Debugging