# Skill System Memory (PostgreSQL)
Persistent shared memory for all AI agents. PostgreSQL 14+ on Linux or Windows.
Memory failures look like intelligence failures — this skill ensures the right memory is retrieved at the right time.
## Quick Start
The `agent_memory` database and all functions are created by `init.sql` in this skill directory.

```bash
# Linux — replace 'postgres' with your PostgreSQL superuser if different (e.g. your system username)
psql -U postgres -c "CREATE DATABASE agent_memory;"
psql -U postgres -d agent_memory -f init.sql
```
```powershell
# Windows (adjust path to your psql.exe; replace 'postgres' with your PG superuser if needed)
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -c "CREATE DATABASE agent_memory;"
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -d agent_memory -f init.sql
```
> **Note**: If your PostgreSQL installation does not have a `postgres` role, use your actual
> PostgreSQL superuser name. On many Linux distros this matches your OS username.
> You can override at any time by setting `PGUSER` before running scripts:
> `export PGUSER=your_pg_username` (Linux/macOS) or `$env:PGUSER = "your_pg_username"` (PowerShell).
Verify: `SELECT * FROM memory_health_check();`
## Pure Skill Mode (default)
This skill works without installing any plugin. In pure skill mode:
- you manually run scripts when you want (progressive disclosure)
- no global OpenCode config is modified automatically
### Optional bootstrap (asks, records choices, and attempts installs)

Notes:
- Interactive mode defaults to NOT installing heavy optional components.
- Use `--install-all` / `-InstallAll` only when you're ready to install everything.

Run the bootstrap script to choose optional components (pgpass, local embeddings, pgvector) and record decisions.
Bootstrap can also optionally install the OpenCode compaction logging plugin (it will copy the plugin into your OpenCode plugins directory).

Windows:

```powershell
# run from the skill directory
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\bootstrap.ps1"
```
Linux/macOS:

```bash
# run from the skill directory
bash "scripts/bootstrap.sh"
```
The selection record is stored at:
- `~/.config/opencode/skill-system-memory/setup.json`
Agent rule:
- If this file does not exist, ask the user if they want to enable optional components.
- Recommended: run bootstrap with all options enabled (then fix any failures it reports).
On Windows, pgvector installation follows the official pgvector instructions (Visual Studio C++ + `nmake /F Makefile.win`). The bootstrap will attempt to install prerequisites via `winget`.
## Optional automation: compaction logging (OpenCode plugin)
If you want automatic compaction logging, install the OpenCode plugin template shipped with this skill.
Option A (recommended): run bootstrap and choose the plugin option.
- Copy `plugins/skill-system-memory.js` to `~/.config/opencode/plugins/`
- Restart OpenCode
Safety / rollback (if OpenCode gets stuck on startup):
- Remove or rename `~/.config/opencode/plugins/skill-system-memory.js`
- Restart OpenCode
- Check logs:
  - macOS/Linux: `~/.local/share/opencode/log/`
  - Windows: `%USERPROFILE%\.local\share\opencode\log`
Plugin behavior notes:
- The plugin is designed to be a no-op unless you explicitly enabled it via bootstrap (which sets `selected.opencode_plugin=true` in `setup.json`).
- It only attempts a Postgres write if `selected.pgpass=true` (avoids hanging on auth prompts).

Uninstall:
- Remove `~/.config/opencode/plugins/skill-system-memory.js`
- Restart OpenCode
## Credentials (psql)
Do NOT hardcode passwords in scripts, skill docs, or config files.
Recommended options for non-interactive `psql`:

- `.pgpass` / `pgpass.conf` (recommended)
  - Linux/macOS: `~/.pgpass` (must be `chmod 0600 ~/.pgpass` or libpq will ignore it)
  - Windows: `%APPDATA%\postgresql\pgpass.conf` (example: `C:\Users\<you>\AppData\Roaming\postgresql\pgpass.conf`)
  - Format: `hostname:port:database:username:password`
  - Docs: https://www.postgresql.org/docs/current/libpq-pgpass.html
- `PGPASSFILE` (optional override): point to a custom location for the password file
- `PGPASSWORD` (not recommended): only for quick local testing; environment variables can leak on some systems
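For instance, a password-file entry for this skill's database could be sketched like this (hostname, port, and password are placeholders; in practice the file is `~/.pgpass`, not a temp file):

```shell
# Create a pgpass-style entry for agent_memory (placeholder credentials).
PGPASSFILE="$(mktemp)"
printf '%s\n' 'localhost:5432:agent_memory:postgres:YOUR_PASSWORD_HERE' > "$PGPASSFILE"
chmod 0600 "$PGPASSFILE"   # required on Linux/macOS, or libpq silently ignores the file
# psql then authenticates without prompting, e.g.:
# PGPASSFILE="$PGPASSFILE" psql -h localhost -d agent_memory -U postgres -c 'SELECT 1;'
```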
Tip: set connection defaults once (per shell) to shorten commands:

```bash
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=agent_memory
export PGUSER=postgres  # change to your PG superuser if the postgres role does not exist
```

Shell copy/paste safety:
- Avoid copying inline markdown backticks (e.g. `semantic`) into your shell. In zsh, backticks are command substitution.
- Prefer the wrapper scripts (`scripts/mem.sh`, `scripts/mem.ps1`) or copy from fenced code blocks.
## One-time setup helper scripts

This skill ships helper scripts (relative paths): `scripts/setup-pgpass.ps1`, `scripts/setup-pgpass.sh`.

OpenCode usage: run them from the skill directory.

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\setup-pgpass.ps1"
```

Linux/macOS:

```bash
bash "scripts/setup-pgpass.sh"
```

## Memory Types
| Type | Lifespan | Use When |
|---|---|---|
| `working` | 24h auto-expire | Current conversation context (requires …) |
| `episodic` | Permanent + decay | Problem-solving experiences, debugging sessions |
| `semantic` | Permanent | Extracted facts, knowledge, patterns |
| `procedural` | Permanent | Step-by-step procedures, checklists (importance >= 7) |
## Core Functions

### `store_memory(type, category, tags[], title, content, metadata, agent_id, session_id, importance)`

Auto-deduplicates by content hash. Duplicate inserts bump `access_count` and `importance_score`.

```sql
SELECT store_memory(
  'semantic',
  'windows-networking',
  ARRAY['ssh', 'tunnel', 'port-conflict'],
  'SSH Tunnel Port Conflict Resolution',
  'Fix: 1) taskkill /F /IM ssh.exe 2) Use processId not pid 3) Wait 3s',
  '{"os": "Windows 11"}',
  'sisyphus',
  NULL,
  9.0
);
```
### Wrapper: `scripts/mem.sh` / `scripts/mem.ps1` (recommended)

These wrappers reduce enum/quoting mistakes:

```bash
# show allowed enum values
bash "scripts/mem.sh" types

# search
bash "scripts/mem.sh" search "pgvector windows install" 5

# store (content via stdin)
printf '%s' "Steps: ..." | bash "scripts/mem.sh" store semantic project "pgvector install" "postgres,pgvector,windows" 8
```

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" types
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" search "pgvector windows install" 5
"Steps: ..." | powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" store semantic project "pgvector install" "postgres,pgvector,windows" 8
```

## Router Integration (optional)
If you use a Router skill that executes pinned pipelines, it can read a manifest embedded in this `SKILL.md`.
For portability, the manifest block is fenced as YAML but the content is JSON (valid YAML). The Router parses it.

```yaml
{
  "schema_version": "2.0",
  "id": "skill-system-memory",
  "version": "0.2.0",
  "capabilities": ["memory-search", "memory-store", "memory-health", "memory-types"],
  "effects": ["proc.exec", "db.read", "db.write"],
  "operations": {
    "search": {
      "description": "Search memories by natural language query. Returns ranked results with relevance scores.",
      "input": {
        "query": { "type": "string", "required": true, "description": "Natural language search query" },
        "limit": { "type": "integer", "required": false, "default": 5, "description": "Max results" }
      },
      "output": {
        "description": "Array of memory matches with id, title, content, relevance_score",
        "fields": { "status": "ok | error", "data": "array of {id, title, content, relevance_score}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "search", "{query}", "{limit}"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "search", "{query}", "{limit}"]
      }
    },
    "store": {
      "description": "Store a new memory. Auto-deduplicates by content hash.",
      "input": {
        "memory_type": { "type": "string", "required": true, "description": "One of: semantic, episodic, procedural, working" },
        "category": { "type": "string", "required": true, "description": "Category name" },
        "title": { "type": "string", "required": true, "description": "One-line summary" },
        "tags_csv": { "type": "string", "required": true, "description": "Comma-separated tags" },
        "importance": { "type": "integer", "required": true, "description": "1-10 importance score" }
      },
      "output": {
        "description": "Confirmation with stored memory id",
        "fields": { "status": "ok | error", "id": "integer" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "store", "{memory_type}", "{category}", "{title}", "{tags_csv}", "{importance}"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "store", "{memory_type}", "{category}", "{title}", "{tags_csv}", "{importance}"]
      }
    },
    "health": {
      "description": "Check memory system health: total count, average importance, stale count.",
      "input": {},
      "output": {
        "description": "Health metrics",
        "fields": { "status": "ok | error", "data": "array of {metric, value, status}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "health"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "health"]
      }
    },
    "types": {
      "description": "List available memory types and their descriptions.",
      "input": {},
      "output": {
        "description": "Memory type definitions",
        "fields": { "status": "ok | error", "data": "array of {type, lifespan, description}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "types"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "types"]
      }
    }
  },
  "stdout_contract": {
    "last_line_json": true
  }
}
```

Notes:
- The Router expects each step to print last-line JSON.
- These Router adapter scripts are separate from `mem.sh` / `mem.ps1` to avoid breaking existing workflows.
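As an illustration of the `stdout_contract` (`last_line_json: true`), a hypothetical adapter would log freely and then emit exactly one JSON object as its final stdout line (the real adapters are `scripts/router_mem.sh` / `router_mem.ps1`; the sample data below is invented):

```shell
# Hypothetical Router adapter sketch: diagnostics first, one JSON object as the last line.
query="ssh tunnel"
data='[{"id":1,"title":"SSH Tunnel Port Conflict Resolution","relevance_score":0.92}]'
echo "searching memories for: $query"        # free-form log output, ignored by the Router
printf '{"status":"ok","data":%s}\n' "$data" # the Router parses only this final line
```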
## Visualize Memories (Markdown export)
If querying PostgreSQL is too inconvenient for daily use, you can export memories into markdown files under `./Memory/` (current directory by default):

```bash
bash "<skill-dir>/scripts/sync_memory_to_md.sh" --out-dir "./Memory"
```

Outputs:
- `Memory/Long.md` (semantic + procedural)
- `Memory/Procedural.md` (procedural only)
- `Memory/Short.md` (friction + compaction-daily + procedural highlights)
- `Memory/Episodic.md` (episodic)

Backups:
- Backups are stored under `Memory/.backups/` to avoid a noisy `git status`
- Use `--no-backup` to disable.

The sync script will also create `Memory/.gitignore` if it doesn't exist (ignores `.backups/` and `SYNC_STATUS.txt`).

Long index:
- `Memory/Long.md` includes an `Index` section (top categories + tags) to make the export browsable.
### `search_memories(query, types[], categories[], tags[], agent_id, min_importance, limit)`

Hybrid search: full-text (tsvector) + trigram similarity (pg_trgm) + tag filtering.
Accepts plain English queries — no tsquery syntax needed.
Relevance scoring: `text_score * decay * recency * importance`.

```sql
-- Natural language
SELECT * FROM search_memories('ssh tunnel port conflict', NULL, NULL, NULL, NULL, 7.0, 5);

-- Filter by type + tags
SELECT * FROM search_memories(
  'troubleshooting steps',
  ARRAY['procedural']::memory_type[],
  NULL,
  ARRAY['ssh'],
  NULL, 0.0, 5
);
```

Returns: `id, memory_type, category, title, content, metadata, importance_score, relevance_score, match_type`.
Where `match_type` is one of: `fulltext`, `trigram_title`, `trigram_content`, `metadata`.
### `memory_health_check()`

Returns `metric | value | status` rows for `total_memories`, `avg_importance`, `stale_count`.

### `apply_memory_decay()`

Decays episodic memories by `0.9999^days_since_access`. Run daily.

### `prune_stale_memories(age_days, max_importance, max_access_count)`

Soft-deletes old episodic memories below thresholds. Default: 180 days, importance <= 3, never accessed.
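The decay arithmetic can be sanity-checked with a quick sketch (illustrative only; the authoritative formulas live in `init.sql`):

```shell
# apply_memory_decay() multiplies relevance by 0.9999 per day since last access.
days=365
awk -v d="$days" 'BEGIN { printf "decay after %d days: %.4f\n", d, 0.9999^d }'
# Decay alone stays gentle (~0.96 after a full year untouched), which is why
# prune_stale_memories() exists: pruning, not decay, removes stale entries.
```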
## Agent Workflow
### Before a task
```sql
SELECT id, title, content, relevance_score
FROM search_memories('keywords from user request', NULL, NULL, NULL, NULL, 5.0, 5);
```

If relevant memories are found, reference them: "Based on past experience (memory #1)..."
### After solving a problem
```sql
SELECT store_memory(
  'semantic',
  'category-name',
  ARRAY['tag1', 'tag2', 'tag3'],
  'One-line problem summary',
  'Detailed problem + solution',
  '{"os": "...", "tools": [...]}',
  'agent-name',
  NULL,
  8.0
);
```

### When delegating to subagents
Include in the prompt:

```
MUST DO FIRST:
Search agent_memories: SELECT * FROM search_memories('relevant keywords', NULL, NULL, NULL, NULL, 5.0, 5);

MUST DO AFTER:
If you solved something new, store it with store_memory(...)
```

## Task Memory Layer (optional)
This skill also ships a minimal task/issue layer inspired by Beads: graph semantics + deterministic "ready work" queries.
Objects:
- `agent_tasks`: tasks (status, priority, assignee)
- `task_links`: typed links (`blocks`, `parent_child`, `related`, etc.)
- `blocked_tasks_cache`: materialized cache to make ready queries fast
- `task_memory_links`: link tasks to `agent_memories` for outcomes/notes
Create tasks:

```sql
INSERT INTO agent_tasks(title, description, created_by, priority)
VALUES ('Install pgvector', 'Windows build + enable extension', 'user', 1);
```

Add dependencies:

```sql
-- Task 1 blocks task 2
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (1, 2, 'blocks');

-- Task 2 is parent of task 3 (used for transitive blocking)
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (2, 3, 'parent_child');
```

Rebuild the blocked cache (usually automatic via triggers):

```sql
SELECT rebuild_blocked_tasks_cache();
```

Ready work query:

```sql
SELECT id, title, priority
FROM agent_tasks t
WHERE t.deleted_at IS NULL
  AND t.status IN ('open','in_progress')
  AND NOT EXISTS (SELECT 1 FROM blocked_tasks_cache b WHERE b.task_id = t.id)
ORDER BY priority ASC, updated_at ASC
LIMIT 50;
```

Claim a task (atomic):

```sql
SELECT claim_task(2, 'agent-1');
```

Link a task to a memory:

```sql
INSERT INTO task_memory_links(task_id, memory_id, link_type)
VALUES (2, 123, 'outcome');
```

Optional add-on: `conditional_blocks` (not implemented yet)
- This is intentionally deferred until the core workflow feels solid.
- If you need it now, store a condition in `task_links.metadata` (e.g., `{ "os": "windows" }`) and treat it as documentation.
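To make the blocking semantics concrete, here is a tiny shell model of how a blocked set follows from `blocks` and `parent_child` links (a sketch of the idea only, not the SQL behind `blocked_tasks_cache`):

```shell
# Model: a task is blocked if an open task 'blocks' it; parent_child
# propagates blocking from a blocked parent to its children.
blocks="1:2"          # task 1 blocks task 2
parent_child="2:3"    # task 2 is parent of task 3
blocked="2"           # direct: task 1 is open and blocks task 2

# propagate blocking down parent_child edges (single pass suffices here)
for pc in $parent_child; do
  parent=${pc%%:*}; child=${pc##*:}
  case " $blocked " in *" $parent "*) blocked="$blocked $child";; esac
done

open="1 2 3"; ready=""
for t in $open; do
  case " $blocked " in *" $t "*) ;; *) ready="$ready $t";; esac
done
echo "ready:$ready"   # only task 1 has no blocker, direct or inherited
```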
### Wrapper scripts (recommended)
To avoid re-typing SQL, use the wrapper scripts shipped with this skill:

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" ready 50
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" create "Install pgvector" 1
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" claim 2 agent-1
```

Linux/macOS:

```bash
bash "scripts/tasks.sh" ready 50
bash "scripts/tasks.sh" create "Install pgvector" 1
bash "scripts/tasks.sh" claim 2 agent-1
```

## Compaction Log (high value)
Compaction can delete context. Treat every compaction as an important event and record it.
If you're using OpenCode, prefer the OpenCode plugin route for automatic compaction logging.
### OpenCode plugin (`experimental.session.compacting`)
- Copy `plugins/skill-system-memory.js` to `~/.config/opencode/plugins/`
- Restart OpenCode

It writes local compaction events to `~/.config/opencode/skill-system-memory/compaction-events.jsonl`, and will also attempt a best-effort `store_memory(...)` Postgres write (requires pgpass).

### Verify
```sql
SELECT id, title, relevance_score
FROM search_memories('compaction', NULL, NULL, NULL, NULL, 0, 10);
```

If nothing is inserted, set up `.pgpass` / `pgpass.conf` so `psql` can authenticate without prompting.

### Daily Compaction Consolidation
Raw compaction events are noisy. Run a daily consolidation job that turns many compaction events into one daily memory.
The consolidation scripts default to the OpenCode plugin event log path and will fall back to Claude Code paths if needed.

- OpenCode events: `~/.config/opencode/skill-system-memory-postgres/compaction-events.jsonl`
- Output directory: `~/.config/opencode/skill-system-memory-postgres/compaction-daily/`

Windows run (manual):

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\consolidate-compactions.ps1"
```

Linux/macOS run (manual):

```bash
bash "scripts/consolidate-compactions.sh"
```

Scheduling:
- Windows Task Scheduler: create a daily task that runs the PowerShell command above
- Linux cron example:

```bash
# every day at 02:10 UTC
10 2 * * * bash "<skill-dir>/scripts/consolidate-compactions.sh" >/dev/null 2>&1
```
## Appendix: Claude Code compatibility (optional)
This repository also includes Claude Code hook scripts under `hooks/`. They are not required for OpenCode usage.

## Friction Log (turn pain into tooling)
Whenever something is annoying, brittle, or fails:

- Store an `episodic` memory with category `friction` and tags for the tool/OS/error.
- If it repeats (2+ times), promote it to a `procedural` memory (importance >= 7) with a checklist.
- Update this skill doc when the fix becomes a stable rule/workflow (so every agent learns it).
## Schema Overview

Tables: `agent_memories`, `memory_links`, `working_memory`.
Key columns: `memory_type`, `category`, `tags[]`, `title`, `content`, `content_hash` (auto), `metadata` (JSONB), `importance_score`, `access_count`, `relevance_decay`, `search_vector` (auto).

## Anti-Patterns
| Don't | Do Instead |
|---|---|
| Store everything | Only store non-obvious solutions |
| Skip tags | Tag comprehensively: tech, error codes, platform |
| … | Use … |
| One type for all data | Use correct memory_type per content |
| Forget importance rating | Rate honestly: 9-10 battle-tested, 5-6 partial |
## Sharp Edges
| Issue | Severity | Mitigation |
|---|---|---|
| Chunks lose context | Critical | Store full problem+solution as one unit |
| Old tech memories | High | Run `apply_memory_decay()` daily |
| Duplicate memories | Medium | `store_memory()` deduplicates by content hash |
| No vector search | Info | pg_trgm provides fuzzy matching; pgvector can be added later |
## Cross-Platform Notes

- PostgreSQL 14-18 supported (no partitioning, no GENERATED ALWAYS)
- pg_trgm is the only required extension (ships with all PG distributions)
- Linux: `psql -U postgres -d agent_memory -f init.sql`
- Windows: use the full path to psql.exe, or add the PG bin directory to PATH
- MCP postgres_query: works for read operations; DDL requires psql
## Maintenance

```sql
SELECT apply_memory_decay();                         -- daily
SELECT prune_stale_memories(180, 3.0, 0);            -- monthly
DELETE FROM working_memory WHERE expires_at < NOW(); -- daily
SELECT * FROM memory_health_check();                 -- anytime
```

## Optional: pgvector Semantic Search
可选:pgvector语义搜索
If pgvector is installed on your PostgreSQL server, `init.sql` will:

- create extension `vector` (non-fatal if missing)
- add `agent_memories.embedding vector` (variable dimension)
- create `search_memories_vector(p_embedding, p_embedding_dim, ...)`

Notes:

- This does NOT generate embeddings. You must populate `agent_memories.embedding` yourself.
- Once embeddings exist, you can do nearest-neighbor search:

```sql
-- p_embedding is a pgvector literal; pass it from your app.
-- Optionally filter by dimension (recommended when using multiple models).
SELECT id, title, similarity
FROM search_memories_vector('[0.01, 0.02, ...]'::vector, 768, NULL, NULL, NULL, NULL, 0.0, 10);
```

Note: variable-dimension vectors cannot be indexed with pgvector indexes. This is a tradeoff to support local models with different embedding sizes.

If pgvector is not installed, everything else still works (FTS + pg_trgm).
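To see the shape of the data the search expects, a single embedding can be written by hand — hypothetical values and row id; normally the ingestion scripts below do this:

```sql
-- Write a toy 4-dimensional embedding into one row; real models emit hundreds of dims.
UPDATE agent_memories
SET embedding = '[0.10, -0.32, 0.57, 0.04]'::vector
WHERE id = 42;  -- hypothetical memory id
```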
Embedding Ingestion Pipeline
pgvector search only works after you populate `agent_memories.embedding`.

This skill ships ingestion scripts (relative paths). Run from the skill directory:
`scripts/ingest-embeddings.ps1` / `scripts/ingest-embeddings.sh`

They:

- find memories with `embedding IS NULL`
- call an OpenAI-compatible embeddings endpoint (including Ollama)
- write vectors into `agent_memories.embedding vector`

Requirements:

- pgvector installed + `init.sql` applied (so `agent_memories.embedding` exists)
- `.pgpass` / `pgpass.conf` configured (so `psql -w` can write without prompting)
- env vars for embedding API:
  - `EMBEDDING_PROVIDER` (`openai` or `ollama`; default `openai`)
  - `EMBEDDING_API_KEY` (required for `openai`; optional for `ollama`)
  - `EMBEDDING_API_URL` (default depends on provider)
  - `EMBEDDING_MODEL` (default depends on provider)
  - `EMBEDDING_DIMENSIONS` (optional; forwarded to the embeddings endpoint when supported)

Windows example:

```powershell
$env:EMBEDDING_PROVIDER = "ollama"
$env:EMBEDDING_MODEL = "nomic-embed-text"
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\ingest-embeddings.ps1" -Limit 25
```

Linux/macOS example:

```bash
export EMBEDDING_API_KEY=...
export EMBEDDING_MODEL=text-embedding-3-small
bash "scripts/ingest-embeddings.sh"
```

Scheduling:

- run daily (or hourly) after you add new memories
- keep `Limit` small until you trust it

Robustness note:

- On Windows, very long SQL strings can be fragile when passed via `psql -c`. The ingestion script writes per-row updates to a temporary `.sql` file and runs `psql -f` to avoid command-line length/quoting edge cases.
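The `.pgpass` requirement can also be met without touching your home directory by pointing libpq's standard `PGPASSFILE` variable at a dedicated file — a sketch with placeholder credentials:

```shell
# Use a dedicated libpq password file instead of ~/.pgpass
export PGPASSFILE="$PWD/agent_memory.pgpass"
# Format is host:port:database:user:password (wildcards * are allowed per field)
printf 'localhost:5432:agent_memory:postgres:your_password_here\n' > "$PGPASSFILE"
# libpq ignores password files with group/world access
chmod 600 "$PGPASSFILE"
```

With this in place, `psql -w -d agent_memory` and the ingestion scripts can authenticate without prompting.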
Related Skills
- `systematic-debugging`
- `postgres-pro`
- `postgresql-table-design`