skill-system-memory


# Skill System Memory (PostgreSQL)


Persistent shared memory for all AI agents. PostgreSQL 14+ on Linux or Windows. Memory failures look like intelligence failures — this skill ensures the right memory is retrieved at the right time.

## Quick Start

Database `agent_memory` and all functions are created by `init.sql` in this skill directory.

```bash
# Linux — replace 'postgres' with your PostgreSQL superuser if different (e.g. your system username)
psql -U postgres -c "CREATE DATABASE agent_memory;"
psql -U postgres -d agent_memory -f init.sql
```

```powershell
# Windows (adjust path to your psql.exe; replace 'postgres' with your PG superuser if needed)
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -c "CREATE DATABASE agent_memory;"
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -d agent_memory -f init.sql
```

> **Note**: If your PostgreSQL installation does not have a `postgres` role, use your actual
> PostgreSQL superuser name. On many Linux distros this matches your OS username.
> You can override at any time by setting `PGUSER` before running scripts:
> `export PGUSER=your_pg_username` (Linux/macOS) or `$env:PGUSER = "your_pg_username"` (PowerShell).

Verify: `SELECT * FROM memory_health_check();`


## Pure Skill Mode (default)

This skill works without installing any plugin. In pure skill mode:

- you manually run scripts when you want (progressive disclosure)
- no global OpenCode config is modified automatically

## Optional bootstrap (asks + records choices + tries to install)

Notes:

- Interactive mode defaults to NOT installing heavy optional components.
- Use `-InstallAll` / `--install-all` only when you're ready to install everything.

Run the bootstrap script to choose optional components (pgpass, local embeddings, pgvector) and record decisions.
Bootstrap can also optionally install the OpenCode compaction logging plugin (it will copy the plugin into your OpenCode plugins directory).

Windows:

```powershell
# run from the skill directory
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\bootstrap.ps1"
```

Linux/macOS:

```bash
# run from the skill directory
bash "scripts/bootstrap.sh"
```

The selection record is stored at:

- `~/.config/opencode/skill-system-memory/setup.json`

Agent rule:

- If this file does not exist, ask the user if they want to enable optional components.
- Recommended: run bootstrap with all options enabled (then fix any failures it reports).

On Windows, pgvector installation follows the official pgvector instructions (Visual Studio C++ + `nmake /F Makefile.win`). The bootstrap will attempt to install prerequisites via `winget`.

## Optional automation: compaction logging (OpenCode plugin)

If you want automatic compaction logging, install the OpenCode plugin template shipped with this skill.
Option A (recommended): run bootstrap and choose the plugin option.

1. Copy `plugins/skill-system-memory.js` to `~/.config/opencode/plugins/`
2. Restart OpenCode

Safety / rollback (if OpenCode gets stuck on startup):

- Remove or rename `~/.config/opencode/plugins/skill-system-memory.js`
- Restart OpenCode
- Check logs:
  - macOS/Linux: `~/.local/share/opencode/log/`
  - Windows: `%USERPROFILE%\.local\share\opencode\log`

Plugin behavior notes:

- The plugin is designed to be a no-op unless you explicitly enabled it via bootstrap (`setup.json` sets `selected.opencode_plugin=true`).
- It only attempts a Postgres write if `selected.pgpass=true` (avoids hanging on auth prompts).

Uninstall:

- Remove `~/.config/opencode/plugins/skill-system-memory.js`
- Restart OpenCode

## Credentials (psql)

Do NOT hardcode passwords in scripts, skill docs, or config files.

Recommended options for non-interactive `psql`:

- `.pgpass` / `pgpass.conf` (recommended)
  - Linux/macOS: `~/.pgpass` (must be `chmod 0600 ~/.pgpass` or libpq will ignore it)
  - Windows: `%APPDATA%\postgresql\pgpass.conf` (example: `C:\Users\<you>\AppData\Roaming\postgresql\pgpass.conf`)
  - Format: `hostname:port:database:username:password`
  - Docs: https://www.postgresql.org/docs/current/libpq-pgpass.html
- `PGPASSFILE` (optional override): point to a custom location for the password file
- `PGPASSWORD` (not recommended): only for quick local testing; environment variables can leak on some systems
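A minimal sketch of the format above, using hypothetical credentials and a scratch path via `PGPASSFILE` instead of the real `~/.pgpass`:

```shell
# Hypothetical credentials for illustration. libpq reads ~/.pgpass by default;
# PGPASSFILE points it at a custom location instead (handy for testing).
export PGPASSFILE="$(mktemp)"

# One entry per line: hostname:port:database:username:password
printf '%s\n' 'localhost:5432:agent_memory:postgres:your_password_here' > "$PGPASSFILE"

# libpq silently ignores the file unless permissions are 0600
chmod 0600 "$PGPASSFILE"
```

With this in place, `psql -h localhost -d agent_memory -U postgres` runs without a password prompt.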
Tip: set connection defaults once (per shell) to shorten commands:

```bash
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=agent_memory
export PGUSER=postgres   # change to your PG superuser if postgres role does not exist
```

Shell copy/paste safety:

- Avoid copying inline markdown backticks (e.g. `semantic`) into your shell. In zsh, backticks are command substitution.
- Prefer using the wrapper scripts (`scripts/mem.sh`, `scripts/mem.ps1`) or copy from fenced code blocks.

## One-time setup helper scripts

This skill ships helper scripts (relative paths):

- `scripts/setup-pgpass.ps1`
- `scripts/setup-pgpass.sh`

OpenCode usage: run them from the skill directory.

Windows run:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\setup-pgpass.ps1"
```

Linux/macOS run:

```bash
bash "scripts/setup-pgpass.sh"
```

## Memory Types

| Type | Lifespan | Use When |
|------|----------|----------|
| `working` | 24h auto-expire | Current conversation context (requires `session_id`) |
| `episodic` | Permanent + decay | Problem-solving experiences, debugging sessions |
| `semantic` | Permanent | Extracted facts, knowledge, patterns |
| `procedural` | Permanent | Step-by-step procedures, checklists (importance >= 7) |
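For example, a `working` entry must carry a `session_id` so it can expire with the session. A sketch using the `store_memory` signature documented below (the title, content, and session id are hypothetical):

```sql
-- Hypothetical working-memory entry; requires a session_id, auto-expires after 24h.
SELECT store_memory(
    'working',
    'conversation-context',
    ARRAY['context'],
    'User prefers PowerShell examples',
    'This session: the user asked for Windows-first commands.',
    '{}',
    'assistant',
    'session-abc123',
    5.0
);
```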

## Core Functions

### `store_memory(type, category, tags[], title, content, metadata, agent_id, session_id, importance)`

Auto-deduplicates by content hash. Duplicate inserts bump `access_count` and `importance_score`.

```sql
SELECT store_memory(
    'semantic',
    'windows-networking',
    ARRAY['ssh', 'tunnel', 'port-conflict'],
    'SSH Tunnel Port Conflict Resolution',
    'Fix: 1) taskkill /F /IM ssh.exe  2) Use processId not pid  3) Wait 3s',
    '{"os": "Windows 11"}',
    'sisyphus',
    NULL,
    9.0
);
```

### Wrapper: `scripts/mem.sh` / `scripts/mem.ps1` (recommended)

These wrappers reduce enum/quoting mistakes:

```bash
# show allowed enum values
bash "scripts/mem.sh" types

# search
bash "scripts/mem.sh" search "pgvector windows install" 5

# store (content via stdin)
printf '%s' "Steps: ..." | bash "scripts/mem.sh" store semantic project "pgvector install" "postgres,pgvector,windows" 8
```

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" types
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" search "pgvector windows install" 5
"Steps: ..." | powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\mem.ps1" store semantic project "pgvector install" "postgres,pgvector,windows" 8
```

## Router Integration (optional)

If you use a Router skill that executes pinned pipelines, it can read a manifest embedded in this `SKILL.md`.
For portability, the manifest block is fenced as YAML but the content is JSON (valid YAML). The Router parses it.

```skill
{
  "schema_version": "2.0",
  "id": "skill-system-memory",
  "version": "0.2.0",
  "capabilities": ["memory-search", "memory-store", "memory-health", "memory-types"],
  "effects": ["proc.exec", "db.read", "db.write"],
  "operations": {
    "search": {
      "description": "Search memories by natural language query. Returns ranked results with relevance scores.",
      "input": {
        "query": { "type": "string", "required": true, "description": "Natural language search query" },
        "limit": { "type": "integer", "required": false, "default": 5, "description": "Max results" }
      },
      "output": {
        "description": "Array of memory matches with id, title, content, relevance_score",
        "fields": { "status": "ok | error", "data": "array of {id, title, content, relevance_score}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "search", "{query}", "{limit}"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "search", "{query}", "{limit}"]
      }
    },
    "store": {
      "description": "Store a new memory. Auto-deduplicates by content hash.",
      "input": {
        "memory_type": { "type": "string", "required": true, "description": "One of: semantic, episodic, procedural, working" },
        "category": { "type": "string", "required": true, "description": "Category name" },
        "title": { "type": "string", "required": true, "description": "One-line summary" },
        "tags_csv": { "type": "string", "required": true, "description": "Comma-separated tags" },
        "importance": { "type": "integer", "required": true, "description": "1-10 importance score" }
      },
      "output": {
        "description": "Confirmation with stored memory id",
        "fields": { "status": "ok | error", "id": "integer" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "store", "{memory_type}", "{category}", "{title}", "{tags_csv}", "{importance}"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "store", "{memory_type}", "{category}", "{title}", "{tags_csv}", "{importance}"]
      }
    },
    "health": {
      "description": "Check memory system health: total count, average importance, stale count.",
      "input": {},
      "output": {
        "description": "Health metrics",
        "fields": { "status": "ok | error", "data": "array of {metric, value, status}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "health"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "health"]
      }
    },
    "types": {
      "description": "List available memory types and their descriptions.",
      "input": {},
      "output": {
        "description": "Memory type definitions",
        "fields": { "status": "ok | error", "data": "array of {type, lifespan, description}" }
      },
      "entrypoints": {
        "unix": ["bash", "scripts/router_mem.sh", "types"],
        "windows": ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", "scripts\\router_mem.ps1", "types"]
      }
    }
  },
  "stdout_contract": {
    "last_line_json": true
  }
}
```

Notes:

- The Router expects each step to print last-line JSON.
- These Router adapter scripts are separate from `mem.sh` / `mem.ps1` to avoid breaking existing workflows.
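A minimal sketch of that `last_line_json` contract (a hypothetical adapter, not the real `router_mem.sh`): human-readable progress may precede the result, but the final stdout line must be parseable JSON, and a consumer keeps only that line:

```shell
# Hypothetical Router adapter: progress lines first, JSON result last.
emit() {
  echo "searching memories for: $1"       # human-readable progress
  printf '{"status":"ok","data":[]}\n'    # final stdout line: JSON
}

# A consumer honoring the contract keeps only the last line:
result="$(emit "pgvector windows install" | tail -n 1)"
echo "$result"
```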

## Visualize Memories (Markdown export)

If querying PostgreSQL is too inconvenient for daily use, you can export memories into markdown files under `./Memory/` (current directory by default):

```bash
bash "<skill-dir>/scripts/sync_memory_to_md.sh" --out-dir "./Memory"
```

Outputs:

- `Memory/Long.md` (semantic + procedural)
- `Memory/Procedural.md` (procedural only)
- `Memory/Short.md` (friction + compaction-daily + procedural highlights)
- `Memory/Episodic.md` (episodic)

Backups:

- Backups are stored under `Memory/.backups/` to avoid noisy `git status`.
- Use `--no-backup` to disable.

The sync script will also create `Memory/.gitignore` if it doesn't exist (ignores `.backups/` and `SYNC_STATUS.txt`).

Long index:

- `Memory/Long.md` includes an `Index` section (top categories + tags) to make the export browsable.

### `search_memories(query, types[], categories[], tags[], agent_id, min_importance, limit)`

Hybrid search: full-text (tsvector) + trigram similarity (pg_trgm) + tag filtering. Accepts plain English queries — no tsquery syntax needed. Relevance scoring: `text_score * decay * recency * importance`.

```sql
-- Natural language
SELECT * FROM search_memories('ssh tunnel port conflict', NULL, NULL, NULL, NULL, 7.0, 5);

-- Filter by type + tags
SELECT * FROM search_memories(
    'troubleshooting steps',
    ARRAY['procedural']::memory_type[],
    NULL,
    ARRAY['ssh'],
    NULL, 0.0, 5
);
```

Returns: `id, memory_type, category, title, content, importance_score, relevance_score, match_type`, where `match_type` is one of: `fulltext`, `trigram_title`, `trigram_content`, `metadata`.

### `memory_health_check()`

Returns: `metric | value | status` for `total_memories`, `avg_importance`, `stale_count`.

### `apply_memory_decay()`

Decays episodic memories by `0.9999^days_since_access`. Run daily.
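To get a feel for that factor (a back-of-envelope sketch, not the SQL implementation): the per-day decay is tiny, so scores erode over months and years, not days:

```shell
# Decay multiplier after N days without access, and the half-life implied
# by solving 0.9999^d = 0.5.
decay_report="$(awk 'BEGIN {
  n = split("30 180 365", days, " ")
  for (i = 1; i <= n; i++)
    printf "after %d days: x%.4f\n", days[i], 0.9999 ^ days[i]
  printf "half-life: ~%d days\n", log(0.5) / log(0.9999)
}')"
echo "$decay_report"
```

So a memory untouched for a year keeps roughly 96% of its score; importance and recency weighting do most of the ranking work in practice.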

### `prune_stale_memories(age_days, max_importance, max_access_count)`

Soft-deletes old episodic memories below thresholds. Default: 180 days, importance <= 3, never accessed.
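A stricter sweep than the defaults might look like this (a sketch; the thresholds are illustrative):

```sql
-- Soft-delete episodic memories older than 90 days with importance <= 2
-- that were accessed at most once.
SELECT prune_stale_memories(90, 2.0, 1);
```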

## Agent Workflow

### Before a task

```sql
SELECT id, title, content, relevance_score
FROM search_memories('keywords from user request', NULL, NULL, NULL, NULL, 5.0, 5);
```

If relevant memories found, reference them: "Based on past experience (memory #1)..."

### After solving a problem

```sql
SELECT store_memory(
    'semantic',
    'category-name',
    ARRAY['tag1', 'tag2', 'tag3'],
    'One-line problem summary',
    'Detailed problem + solution',
    '{"os": "...", "tools": [...]}',
    'agent-name',
    NULL,
    8.0
);
```

### When delegating to subagents

Include in prompt:

```
MUST DO FIRST:
  Search agent_memories: SELECT * FROM search_memories('relevant keywords', NULL, NULL, NULL, NULL, 5.0, 5);

MUST DO AFTER:
  If you solved something new, store it with store_memory(...)
```

## Task Memory Layer (optional)

This skill also ships a minimal task/issue layer inspired by Beads: graph semantics + deterministic "ready work" queries.

Objects:

- `agent_tasks`: tasks (status, priority, assignee)
- `task_links`: typed links (`blocks`, `parent_child`, `related`, etc.)
- `blocked_tasks_cache`: materialized cache to make ready queries fast
- `task_memory_links`: link tasks to memories (`agent_memories`) for outcomes/notes

Create tasks:

```sql
INSERT INTO agent_tasks(title, description, created_by, priority)
VALUES ('Install pgvector', 'Windows build + enable extension', 'user', 1);
```

Add dependencies:

```sql
-- Task 1 blocks task 2
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (1, 2, 'blocks');

-- Task 2 is parent of task 3 (used for transitive blocking)
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (2, 3, 'parent_child');
```

Rebuild blocked cache (usually auto via triggers):

```sql
SELECT rebuild_blocked_tasks_cache();
```

Ready work query:

```sql
SELECT id, title, priority
FROM agent_tasks t
WHERE t.deleted_at IS NULL
  AND t.status IN ('open','in_progress')
  AND NOT EXISTS (SELECT 1 FROM blocked_tasks_cache b WHERE b.task_id = t.id)
ORDER BY priority ASC, updated_at ASC
LIMIT 50;
```

Claim a task (atomic):

```sql
SELECT claim_task(2, 'agent-1');
```

Link a task to a memory:

```sql
INSERT INTO task_memory_links(task_id, memory_id, link_type)
VALUES (2, 123, 'outcome');
```

Optional add-on: `conditional_blocks` (not implemented yet)

- This is intentionally deferred until the core workflow feels solid.
- If you need it now, store a condition in `task_links.metadata` (e.g., `{ "os": "windows" }`) and treat it as documentation.
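The workaround in the last bullet could look like this (a sketch; it assumes `task_links` has a `metadata` column, and nothing evaluates the condition — readers interpret it, not the engine):

```sql
-- Record an OS-conditional block as documentation only.
INSERT INTO task_links(from_task_id, to_task_id, link_type, metadata)
VALUES (1, 2, 'blocks', '{"os": "windows"}');
```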

### Wrapper scripts (recommended)

To avoid re-typing SQL, use the wrapper scripts shipped with this skill:

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" ready 50
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" create "Install pgvector" 1
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" claim 2 agent-1
```

Linux/macOS:

```bash
bash "scripts/tasks.sh" ready 50
bash "scripts/tasks.sh" create "Install pgvector" 1
bash "scripts/tasks.sh" claim 2 agent-1
```

## Compaction Log (high value)

Compaction can delete context. Treat every compaction as an important event and record it.
If you're using OpenCode, prefer the OpenCode plugin route for automatic compaction logging.

### OpenCode plugin (experimental.session.compacting)

1. Copy `plugins/skill-system-memory.js` to `~/.config/opencode/plugins/`
2. Restart OpenCode

It writes local compaction events to:

- `~/.config/opencode/skill-system-memory/compaction-events.jsonl`

And will also attempt a best-effort Postgres `store_memory(...)` write (requires pgpass).

### Verify

```sql
SELECT id, title, relevance_score
FROM search_memories('compaction', NULL, NULL, NULL, NULL, 0, 10);
```

If nothing is inserted, set up `.pgpass` / `pgpass.conf` so `psql` can authenticate without prompting.

### Daily Compaction Consolidation

Raw compaction events are noisy. Run a daily consolidation job that turns many compaction events into one daily memory.
The consolidation scripts default to the OpenCode plugin event log path and will fall back to Claude Code paths if needed.

- OpenCode events: `~/.config/opencode/skill-system-memory-postgres/compaction-events.jsonl`
- Output directory: `~/.config/opencode/skill-system-memory-postgres/compaction-daily/`

Windows run (manual):

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\consolidate-compactions.ps1"
```

Linux/macOS run (manual):

```bash
bash "scripts/consolidate-compactions.sh"
```

Scheduling:

- Windows Task Scheduler: create a daily task that runs the PowerShell command above
- Linux cron example:

```bash
# every day at 02:10 UTC
10 2 * * * bash "<skill-dir>/scripts/consolidate-compactions.sh" >/dev/null 2>&1
```

## Appendix: Claude Code compatibility (optional)

This repository also includes Claude Code hook scripts under `hooks/`. They are not required for OpenCode usage.

## Friction Log (turn pain into tooling)

Whenever something is annoying, brittle, or fails:

1. Store an `episodic` memory with category `friction` and tags for the tool/OS/error.
2. If it repeats (2+ times), promote it to `procedural` memory (importance >= 7) with a checklist.
3. Update this skill doc when the fix becomes a stable rule/workflow (so every agent learns it).
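Step 1 might look like this (a hypothetical friction entry; the category and tag conventions follow the list above):

```sql
SELECT store_memory(
    'episodic',
    'friction',
    ARRAY['psql', 'windows', 'auth'],
    'psql hangs waiting for a password in non-interactive runs',
    'Symptom: script stalls at the auth prompt. Fix: configure pgpass.conf so libpq authenticates silently.',
    '{"os": "Windows 11"}',
    'agent-name',
    NULL,
    6.0
);
```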

## Schema Overview

- `agent_memories` — Main table. Full-text search, trigram indexes, JSONB metadata, soft-delete.
- `memory_links` — Graph relationships (references, supersedes, contradicts).
- `working_memory` — Ephemeral session context with auto-expire.

Key columns: `memory_type`, `category`, `tags[]`, `title`, `content`, `content_hash` (auto), `metadata` (JSONB), `importance_score`, `access_count`, `relevance_decay`, `search_vector` (auto).
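A `supersedes` link might be recorded like this (a sketch; the column names are assumptions, so verify them against `init.sql` before use):

```sql
-- Sketch: mark memory 124 as superseding memory 123.
-- from/to column names are assumed by analogy with task_links.
INSERT INTO memory_links(from_memory_id, to_memory_id, link_type)
VALUES (124, 123, 'supersedes');
```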

## Anti-Patterns

| Don't | Do Instead |
|---|---|
| Store everything | Only store non-obvious solutions |
| Skip tags | Tag comprehensively: tech, error codes, platform |
| Use `to_tsquery` directly | `search_memories()` handles this via `plainto_tsquery` |
| One type for all data | Use correct memory_type per content |
| Forget importance rating | Rate honestly: 9-10 battle-tested, 5-6 partial |
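The `to_tsquery` row matters because raw user text is rarely valid tsquery syntax. A sketch of the difference (the single-text-argument call to `search_memories()` is an assumption; check its signature in `init.sql`):

```sql
-- Don't: to_tsquery errors on free text with punctuation or bare spaces
-- SELECT to_tsquery('english', 'connection refused docker');  -- syntax error

-- Do: search_memories() normalizes free text via plainto_tsquery internally
SELECT * FROM search_memories('connection refused docker');
```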

## Sharp Edges

| Issue | Severity | Mitigation |
|---|---|---|
| Chunks lose context | Critical | Store full problem+solution as one unit |
| Old tech memories | High | `apply_memory_decay()` daily; prune stale |
| Duplicate memories | Medium | `store_memory()` auto-deduplicates by content_hash |
| No vector search | Info | pg_trgm provides fuzzy matching; pgvector can be added later |

## Cross-Platform Notes

- PostgreSQL 14-18 supported (no partitioning, no GENERATED ALWAYS)
- pg_trgm is the only required extension (ships with all PG distributions)
- Linux: `psql -U postgres -d agent_memory -f init.sql`
- Windows: use the full path to psql.exe or add the PG bin directory to PATH
- MCP postgres_query: works for read operations; DDL requires psql

## Maintenance

```sql
SELECT apply_memory_decay();                          -- daily
SELECT prune_stale_memories(180, 3.0, 0);             -- monthly
DELETE FROM working_memory WHERE expires_at < NOW();  -- daily
SELECT * FROM memory_health_check();                  -- anytime
```
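One way to automate the daily and monthly tasks above is a pair of cron entries (a sketch: adjust the user, times, and `.pgpass` setup so `psql` runs non-interactively):

```bash
# daily at 03:10: decay scores and clear expired working memory
10 3 * * * psql -U postgres -d agent_memory -c "SELECT apply_memory_decay(); DELETE FROM working_memory WHERE expires_at < NOW();"
# monthly on the 1st at 04:00: prune stale memories
0 4 1 * * psql -U postgres -d agent_memory -c "SELECT prune_stale_memories(180, 3.0, 0);"
```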

## Optional: pgvector Semantic Search

If pgvector is installed on your PostgreSQL server, `init.sql` will:

- create extension `vector` (non-fatal if missing)
- add `agent_memories.embedding vector` (variable dimension)
- create `search_memories_vector(p_embedding, p_embedding_dim, ...)`

Notes:

- This does NOT generate embeddings. You must populate `agent_memories.embedding` yourself.
- Once embeddings exist, you can do nearest-neighbor search:

```sql
-- p_embedding is a pgvector literal; pass it from your app.
-- Optionally filter by dimension (recommended when using multiple models).
SELECT id, title, similarity
FROM search_memories_vector('[0.01, 0.02, ...]'::vector, 768, NULL, NULL, NULL, NULL, 0.0, 10);
```

Note: variable-dimension vectors cannot be indexed with pgvector indexes. This is a tradeoff to support local models with different embedding sizes.

If pgvector is not installed, everything else still works (FTS + pg_trgm).

## Embedding Ingestion Pipeline

pgvector search only works after you populate `agent_memories.embedding`. This skill ships ingestion scripts (relative paths). Run from the skill directory:

- scripts/ingest-embeddings.ps1
- scripts/ingest-embeddings.sh

They:

- find memories with `embedding IS NULL`
- call an OpenAI-compatible embeddings endpoint (including Ollama)
- write vectors into `agent_memories.embedding vector`

Requirements:

- pgvector installed + `init.sql` applied (so `agent_memories.embedding` exists)
- `.pgpass` / `pgpass.conf` configured (so `psql -w` can write without prompting)
- env vars for the embedding API:
  - `EMBEDDING_PROVIDER` (`ollama` or `openai`; default `openai`)
  - `EMBEDDING_API_KEY` (required for `openai`; optional for `ollama`)
  - `EMBEDDING_API_URL` (default depends on provider)
  - `EMBEDDING_MODEL` (default depends on provider)
  - `EMBEDDING_DIMENSIONS` (optional; forwarded to the embeddings endpoint when supported)

Windows example:

```powershell
$env:EMBEDDING_PROVIDER = "ollama"
$env:EMBEDDING_MODEL = "nomic-embed-text"
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\ingest-embeddings.ps1" -Limit 25
```

Linux/macOS example:

```bash
export EMBEDDING_API_KEY=...
export EMBEDDING_MODEL=text-embedding-3-small
bash "scripts/ingest-embeddings.sh"
```

Scheduling:

- run daily (or hourly) after you add new memories
- keep `Limit` small until you trust it

Robustness note:

- On Windows, very long SQL strings can be fragile when passed via `psql -c`. The ingestion script writes per-row updates to a temporary `.sql` file and runs `psql -f` to avoid command-line length/quoting edge cases.
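The target state the scripts produce is a per-row update like the following. This is an illustration only (the exact SQL the scripts emit may differ; the `id` predicate mirrors the search example above):

```sql
-- Illustration: write one embedding, skipping rows that are already populated
UPDATE agent_memories
SET embedding = '[0.013, -0.082, 0.441]'::vector
WHERE id = 42
  AND embedding IS NULL;
```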

## Related Skills

`systematic-debugging`, `postgres-pro`, `postgresql-table-design`