Install and configure LLMem for an agent harness. Handles CLI install, plugin deployment, skill registration, and provider setup. Triggers on: "install llmem", "set up memory", "configure memory", "add llmem to harness", "memory setup".
Quick install of this skill:

```bash
npx skill4agent add michieldean/llmem llmem-setup
```

Install the `llmem` CLI:

```bash
# Option A: Install from source (Go binary)
git clone https://github.com/MichielDean/LLMem.git
cd LLMem && make build
# Binary at ~/.local/bin/llmem, symlinked to /usr/local/bin/llmem

# Option B: One-liner
curl -sSL https://raw.githubusercontent.com/MichielDean/LLMem/main/setup.sh | bash
```

Check that it works:

```bash
llmem --help
llmem stats
```

Initialize:

```bash
llmem init                   # Interactive — detects providers
llmem init --non-interactive # Script-friendly — uses defaults
```

Init writes its config to `~/.config/llmem/config.yaml` and the memory database to `~/.config/llmem/memory.db`.

Provider setup — Ollama:

```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama pull nomic-embed-text
ollama pull qwen2.5:1.5b
# Config auto-detected
```

OpenAI:

```bash
export OPENAI_API_KEY=sk-...
llmem init --non-interactive
```

Local provider:

```bash
pip install ".[local]"
```
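For orientation, `llmem init` generates `~/.config/llmem/config.yaml`, which selects the active provider. A minimal sketch of its shape — the only keys confirmed by this guide are `provider.default` and the `dream:` section, so treat everything else as a placeholder and inspect the file init actually writes:

```yaml
# Hypothetical shape — check the generated file; only provider.default
# and the dream: section are referenced elsewhere in this guide.
provider:
  default: ollama   # or: openai, local
dream:
  # background consolidation schedule is configured here
```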
For the local provider, set `provider.default: local` in `config.yaml`.

Deploy plugins and skills from the repo:

```bash
cd LLMem && npm install
```

`install.js` copies skills to `~/.agents/skills/` and OpenCode tools to `.opencode/tools/`.

| Platform | Plugin file | Target |
|---|---|---|
| OpenCode | `llmem.js` | `~/.config/opencode/plugins/` |
| Claude Code | Entire `plugins/agent/` directory | `~/.claude/plugins/llmem/` |
| Copilot CLI | Entire `plugins/agent/` directory | `~/.copilot/installed-plugins/_direct/llmem/` |
```bash
node install.js --platform opencode     # OpenCode only
node install.js --platform claude-code  # Claude Code only
node install.js --platform copilot      # Copilot CLI only
node install.js --platform all          # All platforms
node install.js --platform none         # Skills only, no plugins
```

### OpenCode

Enable the plugin in `opencode.json`:

```json
{
  "plugin": ["llmem"]
}
```

The plugin is deployed to `~/.config/opencode/plugins/` and custom tools to `.opencode/tools/` (OpenCode honors `OPENCODE_CONFIG_DIR` when resolving these paths). Add to `AGENTS.md`:

```markdown
## Memory
Plugin-managed. Search when uncertain: `llmem search "topic"`. Add when you learn: `llmem add --type fact --content "..."`.
```

### Claude Code

The plugin is deployed to `~/.claude/plugins/llmem/`:

```bash
claude plugin install ~/.claude/plugins/llmem
# Or use --plugin-dir for testing:
claude --plugin-dir ~/.claude/plugins/llmem
```

Registered hooks:

- `SessionStart` → `llmem stats`
- `SessionEnd` → `llmem hook ending`
- `PreCompact` → `llmem context --compacting`

Bundled skills: `llmem`, `llmem-setup`, `introspection`, `introspection-review-tracker`. Add to `CLAUDE.md`:

```markdown
## Memory
Plugin-managed. Search when uncertain: `llmem search "topic"`. Add when you learn: `llmem add --type fact --content "..."`.
```

### Copilot CLI

The plugin is deployed to `~/.copilot/installed-plugins/_direct/llmem/`:

```bash
copilot plugin install ~/.copilot/installed-plugins/_direct/llmem
# Or install directly from the GitHub repo:
copilot plugin install MichielDean/LLMem:plugins/agent
```

Copilot CLI reads the same `.claude-plugin/plugin.json` manifest, with `~/.copilot/` taking the place of `~/.claude/`. Add the same snippet to your Copilot instructions file:

```markdown
## Memory
Plugin-managed. Search when uncertain: `llmem search "topic"`. Add when you learn: `llmem add --type fact --content "..."`.
```
### Verify

```bash
# CLI works
llmem --help
llmem stats

# Can add and search
llmem add --type fact --content "test memory"
llmem search "test"

# Skills are discoverable
ls ~/.agents/skills/llmem ~/.agents/skills/introspection

# Plugin deployed
# OpenCode:
ls ~/.config/opencode/plugins/llmem.js
# Claude Code:
ls ~/.claude/plugins/llmem/.claude-plugin/plugin.json
# Copilot CLI:
ls ~/.copilot/installed-plugins/_direct/llmem/.claude-plugin/plugin.json

# Optional: verify OpenCode tools
ls .opencode/tools/llmem-*.ts
```

### Dream timer (optional)

```bash
# Copy and enable systemd timer
cp harness/llmem-dream.service ~/.config/systemd/user/
cp harness/llmem-dream.timer ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable llmem-dream.timer
systemctl --user start llmem-dream.timer
```

The schedule is configured under the `dream:` key in `~/.config/llmem/config.yaml`.
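The repo ships the actual units under `harness/`; purely for orientation, a service/timer pair of this kind typically looks like the sketch below. The description, schedule, and `ExecStart` command here are assumptions, not the shipped files — in particular, the real schedule comes from the `dream:` config, not a hardcoded `OnCalendar`:

```
# llmem-dream.service — hypothetical sketch, not the file from harness/
[Unit]
Description=LLMem dream consolidation

[Service]
Type=oneshot
# Assumed invocation; check harness/llmem-dream.service for the real command
ExecStart=%h/.local/bin/llmem dream

# llmem-dream.timer — hypothetical sketch
[Unit]
Description=Run LLMem dream consolidation periodically

[Timer]
# Assumed schedule
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```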
### Architecture

```
Agent Session
│
├── Plugin (auto, no instructions needed)
│   ├── session.created/start → llmem stats + search behavioral/proposed → inject context
│   ├── session.idle/end → llmem hook idle/ending → extract + introspect
│   └── session.compacting → llmem context --compacting → preserve key memories
│
├── Skills (on-demand, loaded by trigger)
│   ├── llmem → CLI reference, memory types, commands
│   ├── llmem-setup → This file
│   ├── introspection → Self-assessment framework, error taxonomy
│   └── introspection-review-tracker → Review outcome tracking
│
└── Custom Tools (structural, zero-instruction)
    ├── llmem-search → Search memories
    ├── llmem-add → Add a memory
    ├── llmem-context → Get context for a topic
    ├── llmem-invalidate → Soft-delete a memory
    ├── llmem-stats → Show memory statistics
    └── llmem-hook → Run extraction hook
```

### Troubleshooting

| Problem | Fix |
|---|---|
| `llmem: command not found` | Check `which llmem` and `ls ~/.local/bin/llmem`; if needed, `ln -s ~/.local/bin/llmem /usr/local/bin/llmem` |
| Ollama not reachable | Start it with `ollama serve`; make sure the embedding model is pulled: `ollama pull nomic-embed-text` |
| Plugin not loading | Confirm `~/.config/opencode/plugins/llmem.js` (OpenCode) or `~/.claude/plugins/llmem/` (Claude Code) exists and `ls ~/.agents/skills/llmem/` shows the skill; re-run `node install.js` |
| No context at session start | Run the injection manually: `llmem stats` and `llmem search behavioral --type self_assessment --limit 5` |
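If `llmem: command not found` persists and you would rather not symlink into `/usr/local/bin`, a small POSIX-shell check can prepend the build output directory to `PATH` instead. This is a convenience sketch, not part of the shipped setup; the `~/.local/bin` location comes from the build step above:

```sh
# Prepend ~/.local/bin (where `make build` places llmem) to PATH
# for the current shell, only if it isn't already there.
bin="$HOME/.local/bin"
case ":$PATH:" in
  *":$bin:"*)
    echo "$bin is already on PATH" ;;
  *)
    PATH="$bin:$PATH"
    export PATH
    echo "prepended $bin to PATH" ;;
esac
```

Add the same lines to `~/.profile` (or your shell's rc file) to make the change permanent.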