# LLMem Setup


Install, configure, and integrate LLMem into an agent's harness so it can use structured memory.

## When to Run


  • Setting up memory for a new agent
  • Adding memory to an existing agent harness
  • After cloning LLMem and before first use
  • When an agent asks "how do I get llmem working?"

## Installation Philosophy


**Plugin-first, zero-config instructions.** LLMem uses platform plugins to inject memory context automatically at session start, extract memories on idle/end, and preserve context during compaction. This means:
  • **No manual instruction editing required.** The plugin handles automatic lifecycle hooks.
  • **Skills provide on-demand behavioral guidance.** When the agent encounters a memory-related situation, it loads the skill. No need to paste 80 lines of instructions into AGENTS.md.
  • **One line in config enables everything.** Add the plugin and you're done.

## Procedure


### Step 1: Install LLMem CLI



**Option A: Install from source (Go binary)**

```bash
git clone https://github.com/MichielDean/LLMem.git
cd LLMem && make build
```

Binary lands at `~/.local/bin/llmem`, symlinked to `/usr/local/bin/llmem`.

**Option B: One-liner**

Verify:
```bash
llmem --help
llmem stats
```

### Step 2: Initialize


```bash
llmem init                    # Interactive — detects providers
llmem init --non-interactive  # Script-friendly — uses defaults
```
This creates `~/.config/llmem/config.yaml` and `~/.config/llmem/memory.db`.
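For orientation, a hypothetical sketch of what the generated `~/.config/llmem/config.yaml` can look like, using only keys referenced elsewhere in this guide (`provider.default` from Step 3, `dream:` from Step 7) — confirm against the file `llmem init` actually writes:

```yaml
# Hypothetical sketch; key names and defaults may differ in your version.
provider:
  default: ollama   # or: openai, local, none (see Step 3)
dream:
  # nightly consolidation settings live here (see Step 7)
```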

### Step 3: Configure Provider


Choose one:

**Ollama (local, free):**
```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nomic-embed-text
ollama pull qwen2.5:1.5b
```
Config is auto-detected.


**OpenAI (cloud, needs API key):**
```bash
export OPENAI_API_KEY=sk-...
llmem init --non-interactive
```

**Local (sentence-transformers, no server):**
```bash
pip install ".[local]"
```
Set `provider.default: local` in `config.yaml`.

**None (FTS5-only mode):** Works without any provider. Semantic search is disabled.
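Not sure which provider LLMem will end up using? A rough check, mirroring the fallback order described under Troubleshooting; the endpoint `http://localhost:11434` is Ollama's default listen address, and LLMem's own detection may differ:

```shell
#!/bin/sh
# Rough provider availability check (a sketch, not LLMem's actual logic).
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  provider="ollama"                 # local Ollama server is answering
elif [ -n "${OPENAI_API_KEY:-}" ]; then
  provider="openai"                 # cloud key is present
else
  provider="none (FTS5-only)"       # keyword search only
fi
echo "likely provider: $provider"
```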

### Step 4: Install Plugin and Skills


**The recommended approach — fully automatic:**

The npm postinstall script deploys everything: skills, platform plugin, and tools.
```bash
cd LLMem && npm install
```
This runs `install.js`, which:
  1. Copies 4 skill directories to `~/.agents/skills/`
  2. Auto-detects your platform (OpenCode, Claude Code, Copilot CLI)
  3. Deploys the correct plugin to the right location
  4. Deploys OpenCode custom tools to `.opencode/tools/` (if OpenCode detected)

**Manual plugin deployment:**

If you can't use `npm install`, deploy manually:

| Platform | Plugin file | Target |
| --- | --- | --- |
| OpenCode | `plugins/opencode/llmem.js` | `~/.config/opencode/plugins/llmem.js` |
| Claude Code | Entire `plugins/agent/` directory | `~/.claude/plugins/llmem/` |
| Copilot CLI | Entire `plugins/agent/` directory | `~/.copilot/installed-plugins/_direct/llmem/` |

**Force a specific platform:**
```bash
node install.js --platform opencode    # OpenCode only
node install.js --platform claude-code # Claude Code only
node install.js --platform copilot     # Copilot CLI only
node install.js --platform all         # All platforms
node install.js --platform none        # Skills only, no plugins
```
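The OpenCode row of the manual-deployment table can be sketched as a copy script; the `SRC` checkout location is an assumption — adjust it to wherever you cloned LLMem:

```shell
#!/bin/sh
# Manual OpenCode plugin deployment sketch (paths from the table above).
SRC="${SRC:-$HOME/LLMem}"                    # assumed clone location
DEST="$HOME/.config/opencode/plugins"
mkdir -p "$DEST"
if [ -f "$SRC/plugins/opencode/llmem.js" ]; then
  cp "$SRC/plugins/opencode/llmem.js" "$DEST/llmem.js"
  echo "deployed: $DEST/llmem.js"
else
  echo "plugin source not found under $SRC; clone LLMem first" >&2
fi
```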

### Step 5: Configure Agent (Platform-Specific)


#### OpenCode

Add the plugin to your `opencode.json`:
```json
{
  "plugin": ["llmem"]
}
```
Or, if using local plugin deployment (the file was already copied by install.js), no config is needed — OpenCode auto-discovers plugins in `~/.config/opencode/plugins/`.

Custom tools (`.opencode/tools/`) are auto-discovered by OpenCode when working in the LLMem repo. For other projects, copy the `.opencode/tools/` directory or reference it via `OPENCODE_CONFIG_DIR`.

**Instructions in AGENTS.md — optional.** The plugin injects context at session start. The `llmem` skill loads on-demand. If you want a persistent reminder, add this minimal snippet:
```markdown
## Memory
Plugin-managed. Search when uncertain: `llmem search "topic"`. Add when you learn: `llmem add --type fact --content "..."`.
```

#### Claude Code

The plugin is installed at `~/.claude/plugins/llmem/`. Enable it:
```bash
claude plugin install ~/.claude/plugins/llmem
```
Or use `--plugin-dir` for testing:
```bash
claude --plugin-dir ~/.claude/plugins/llmem
```
The plugin provides:
- **`SessionStart` hook**: Injects `llmem stats` + behavioral patterns + proposed procedures at session start
- **`SessionEnd` hook**: Runs `llmem hook ending` for memory extraction + introspection
- **`PreCompact` hook**: Injects key memories before compaction
- **Skills**: `llmem`, `llmem-setup`, `introspection`, `introspection-review-tracker` — loaded on-demand

**Instructions in CLAUDE.md — optional.** The `SessionStart` hook injects context. If you want a persistent reminder:
```markdown
## Memory
Plugin-managed. Search when uncertain: `llmem search "topic"`. Add when you learn: `llmem add --type fact --content "..."`.
```

#### Copilot CLI

The plugin is installed at `~/.copilot/installed-plugins/_direct/llmem/`. Enable it:
```bash
copilot plugin install ~/.copilot/installed-plugins/_direct/llmem
```
Or install directly from the GitHub repo:
```bash
copilot plugin install MichielDean/LLMem:plugins/agent
```
Copilot CLI uses the same plugin format as Claude Code (`.claude-plugin/plugin.json`) but installs to `~/.copilot/` instead of `~/.claude/`. The hooks and skills are identical.

**Instructions in COPILOT.md — optional.** If you want a persistent reminder:
```markdown
## Memory
Plugin-managed. Search when uncertain: `llmem search "topic"`. Add when you learn: `llmem add --type fact --content "..."`.
```

### Step 6: Verify


```bash
# CLI works
llmem --help
llmem stats

# Can add and search
llmem add --type fact --content "test memory"
llmem search "test"

# Skills are discoverable
ls ~/.agents/skills/llmem ~/.agents/skills/introspection

# Plugin deployed (OpenCode):
ls ~/.config/opencode/plugins/llmem.js
# Plugin deployed (Claude Code):
ls ~/.claude/plugins/llmem/.claude-plugin/plugin.json
# Plugin deployed (Copilot CLI):
ls ~/.copilot/installed-plugins/_direct/llmem/.claude-plugin/plugin.json

# Optional: verify OpenCode tools
ls .opencode/tools/llmem-*.ts
```
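The path checks above can be wrapped in a small report script. A sketch; a "missing" line is expected for platforms whose plugin you didn't install:

```shell
#!/bin/sh
# Report which LLMem artifacts are present on this machine.
status=0
for path in \
  "$HOME/.agents/skills/llmem" \
  "$HOME/.config/opencode/plugins/llmem.js" \
  "$HOME/.claude/plugins/llmem/.claude-plugin/plugin.json" \
  "$HOME/.copilot/installed-plugins/_direct/llmem/.claude-plugin/plugin.json"
do
  if [ -e "$path" ]; then
    echo "ok      $path"
  else
    echo "missing $path"
    status=1       # at least one artifact absent
  fi
done
```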

### Step 7: Dream Timer (Optional)


For automatic memory consolidation, copy and enable the systemd timer:
```bash
cp harness/llmem-dream.service ~/.config/systemd/user/
cp harness/llmem-dream.timer ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable llmem-dream.timer
systemctl --user start llmem-dream.timer
```
Runs nightly at 3am by default. Configure in `~/.config/llmem/config.yaml` under `dream:`.
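To confirm the timer actually got scheduled, you can list it; this only works under a systemd user session, so the snippet falls back to a note elsewhere (e.g. containers, macOS):

```shell
#!/bin/sh
# Show the timer's next run; degrade gracefully without a systemd user session.
out=$(systemctl --user list-timers llmem-dream.timer --no-pager 2>&1) || out=""
echo "${out:-llmem-dream.timer not visible; is a systemd user session running?}"
```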

## Architecture


```
Agent Session
    ├── Plugin (auto, no instructions needed)
    │   ├── session.created/start → llmem stats + search behavioral/proposed → inject context
    │   ├── session.idle/end      → llmem hook idle/ending → extract + introspect
    │   └── session.compacting    → llmem context --compacting → preserve key memories
    ├── Skills (on-demand, loaded by trigger)
    │   ├── llmem                      → CLI reference, memory types, commands
    │   ├── llmem-setup                → This file
    │   ├── introspection              → Self-assessment framework, error taxonomy
    │   └── introspection-review-tracker → Review outcome tracking
    └── Custom Tools (structural, zero-instruction)
        ├── llmem-search   → Search memories
        ├── llmem-add      → Add a memory
        ├── llmem-context  → Get context for a topic
        ├── llmem-invalidate → Soft-delete a memory
        ├── llmem-stats    → Show memory statistics
        └── llmem-hook     → Run extraction hook
```
The plugin handles everything the agent physically cannot do itself (inject context before the first message, extract on idle). The skills provide behavioral guidance when the agent needs it. Custom tools provide typed access to memory operations without requiring skill loading.
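The `session.created/start` path in the tree boils down to two CLI calls (the same ones the Troubleshooting section asks you to run manually). A sketch, guarded for machines where `llmem` isn't on PATH yet:

```shell
#!/bin/sh
# What the plugin effectively runs at session start (see tree above).
if command -v llmem >/dev/null 2>&1; then
  llmem stats
  llmem search behavioral --type self_assessment --limit 5
  ran="llmem"
else
  echo "llmem not installed; see Step 1"
  ran="skipped"
fi
```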

## Troubleshooting


**`llmem: command not found`** — Binary not on PATH. Check `which llmem` or `ls ~/.local/bin/llmem`. You may need `ln -s ~/.local/bin/llmem /usr/local/bin/llmem`.

**Ollama not reachable** — Start Ollama (`ollama serve`), pull models (`ollama pull nomic-embed-text`), or switch providers. LLMem falls back: Ollama → OpenAI → Anthropic → local → none (FTS5-only).

**Plugin not loading** — Verify the plugin file exists at the expected path. For OpenCode, check `~/.config/opencode/plugins/llmem.js`. For Claude Code, check `~/.claude/plugins/llmem/`.

**Skills not discovered** — Verify the skill directories: `ls ~/.agents/skills/llmem/`. If missing, re-run `node install.js`.

**Context not injected at session start** — Check the plugin log. For OpenCode, run `llmem stats` and `llmem search behavioral --type self_assessment --limit 5` manually to verify the commands work; the plugin runs these same commands.