agent-memory-systems-postgres


Agent Memory Systems (PostgreSQL)


Persistent shared memory for all AI agents. PostgreSQL 14+ on Linux or Windows. Memory failures look like intelligence failures — this skill ensures the right memory is retrieved at the right time.

Quick Start


The database `agent_memory` and all functions are created by `init.sql` in this skill directory.

Linux


```bash
psql -U postgres -c "CREATE DATABASE agent_memory;"
psql -U postgres -d agent_memory -f init.sql
```

Windows (adjust path to your psql.exe)


```powershell
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -c "CREATE DATABASE agent_memory;"
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -d agent_memory -f init.sql
```

Verify: `SELECT * FROM memory_health_check();`

Pure Skill Mode (default)


This skill works without installing any plugin. In pure skill mode:
  • you run scripts manually, on demand (progressive disclosure)
  • no global OpenCode config is modified automatically

Optional bootstrap (asks + records choices + tries to install)


Notes:
  • Interactive mode defaults to NOT installing heavy optional components.
  • Use `-InstallAll` / `--install-all` only when you're ready to install everything.

Run the bootstrap script to choose optional components (pgpass, local embeddings, pgvector) and record your decisions.

Windows:

```powershell
# run from the skill directory
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\bootstrap.ps1"
```

Linux/macOS:

```bash
# run from the skill directory
bash "scripts/bootstrap.sh"
```

The selection record is stored at:

- `~/.config/opencode/agent-memory-systems-postgres/setup.json`

Agent rule:

- If this file does not exist, ask the user if they want to enable optional components.
- Recommended: run bootstrap with all options enabled (then fix any failures it reports).

On Windows, pgvector installation follows the official pgvector instructions (Visual Studio C++ + `nmake /F Makefile.win`). The bootstrap will attempt to install prerequisites via `winget`.
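The agent rule above can be sketched as a standalone shell check (illustrative only; how the agent actually prompts the user is up to the agent):

```shell
# If the bootstrap record is missing, the agent should ask about optional components.
SETUP="$HOME/.config/opencode/agent-memory-systems-postgres/setup.json"
if [ -f "$SETUP" ]; then
  msg="setup.json found: optional-component choices already recorded"
else
  msg="setup.json missing: ask the user about optional components"
fi
echo "$msg"
```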

Optional automation: compaction logging (OpenCode plugin)


If you want automatic compaction logging, install the OpenCode plugin template shipped with this skill:
  1. Copy `plugins/agent-memory-systems-postgres.js` to `~/.config/opencode/plugins/`
  2. Restart OpenCode

Credentials (psql)


Do NOT hardcode passwords in scripts, skill docs, or config files.

Recommended options for non-interactive `psql`:
  • `.pgpass` / `pgpass.conf` (recommended)
    • Linux/macOS: `~/.pgpass` (must be `chmod 0600 ~/.pgpass` or libpq will ignore it)
    • Windows: `%APPDATA%\postgresql\pgpass.conf` (example: `C:\Users\<you>\AppData\Roaming\postgresql\pgpass.conf`)
    • Format: `hostname:port:database:username:password`
    • Docs: https://www.postgresql.org/docs/current/libpq-pgpass.html
  • `PGPASSFILE` (optional override): point to a custom location for the password file
  • `PGPASSWORD` (not recommended): only for quick local testing; environment variables can leak on some systems

Tip: set connection defaults once (per shell) to shorten commands:

```bash
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=agent_memory
export PGUSER=postgres
```
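As a concrete illustration of the format above, appending a `~/.pgpass` entry for the defaults might look like this ("examplepass" is a placeholder, not a real credential):

```shell
# Append a .pgpass entry for the agent_memory database and lock down permissions.
# "examplepass" is a placeholder -- substitute your actual password.
PGPASS_FILE="$HOME/.pgpass"
printf '%s\n' 'localhost:5432:agent_memory:postgres:examplepass' >> "$PGPASS_FILE"
chmod 0600 "$PGPASS_FILE"   # required, or libpq silently ignores the file
```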

One-time setup helper scripts


This skill ships helper scripts (relative paths):
  • scripts/setup-pgpass.ps1
  • scripts/setup-pgpass.sh

OpenCode usage: run them from the skill directory.

Windows run:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\setup-pgpass.ps1"
```

Linux/macOS run:

```bash
bash "scripts/setup-pgpass.sh"
```

Memory Types


| Type | Lifespan | Use When |
|---|---|---|
| `working` | 24h auto-expire | Current conversation context (requires `session_id`) |
| `episodic` | Permanent + decay | Problem-solving experiences, debugging sessions |
| `semantic` | Permanent | Extracted facts, knowledge, patterns |
| `procedural` | Permanent | Step-by-step procedures, checklists (importance >= 7) |

Core Functions


store_memory(type, category, tags[], title, content, metadata, agent_id, session_id, importance)


Auto-deduplicates by content hash. Duplicate inserts bump `access_count` and `importance_score`.

```sql
SELECT store_memory(
    'semantic',
    'windows-networking',
    ARRAY['ssh', 'tunnel', 'port-conflict'],
    'SSH Tunnel Port Conflict Resolution',
    'Fix: 1) taskkill /F /IM ssh.exe  2) Use processId not pid  3) Wait 3s',
    '{"os": "Windows 11"}',
    'sisyphus',
    NULL,
    9.0
);
```
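The dedup rule can be illustrated outside the database: identical content hashes to the same key, so a second insert updates counters instead of creating a row. This is a standalone sketch of the idea, not the actual `init.sql` implementation:

```shell
# Hash the same content twice; equal hashes mean the second insert is a duplicate.
h1=$(printf '%s' 'Fix: taskkill /F /IM ssh.exe, then retry' | sha256sum | cut -d' ' -f1)
h2=$(printf '%s' 'Fix: taskkill /F /IM ssh.exe, then retry' | sha256sum | cut -d' ' -f1)
if [ "$h1" = "$h2" ]; then
  echo "duplicate: bump access_count and importance_score instead of inserting"
fi
```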

search_memories(query, types[], categories[], tags[], agent_id, min_importance, limit)


Hybrid search: full-text (tsvector) + trigram similarity (pg_trgm) + tag filtering. Accepts plain English queries (no tsquery syntax needed). Relevance scoring: `text_score * decay * recency * importance`.

```sql
-- Natural language
SELECT * FROM search_memories('ssh tunnel port conflict', NULL, NULL, NULL, NULL, 7.0, 5);

-- Filter by type + tags
SELECT * FROM search_memories(
    'troubleshooting steps',
    ARRAY['procedural']::memory_type[],
    NULL,
    ARRAY['ssh'],
    NULL, 0.0, 5
);
```

Returns: `id, memory_type, category, title, content, importance_score, relevance_score, match_type`, where `match_type` is one of `fulltext`, `trigram_title`, `trigram_content`, `metadata`.
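For intuition, the relevance score is a plain product of the four factors. With hypothetical component values (these numbers are illustrative, not produced by the function):

```shell
# relevance = text_score * decay * recency * importance (illustrative values)
awk -v ts=0.8 -v decay=0.95 -v recency=0.9 -v imp=7.0 \
    'BEGIN { printf "%.3f\n", ts * decay * recency * imp }'
# prints 4.788
```

A stale, low-importance memory with the same text match would score far lower, which is how fresher, battle-tested memories win ties.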

memory_health_check()


Returns `metric | value | status` rows for `total_memories`, `avg_importance`, `stale_count`.

apply_memory_decay()


Decays episodic memories by `0.9999^days_since_access`. Run daily.
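For a sense of scale, a back-of-envelope check of the decay factor after 30 idle days (this reproduces the formula, not the SQL function itself):

```shell
# importance multiplier after 30 days without access: 0.9999^30
awk 'BEGIN { printf "%.4f\n", 0.9999 ^ 30 }'
# prints 0.9970
```

So decay is gentle by design; memories fade over months of disuse rather than days.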

prune_stale_memories(age_days, max_importance, max_access_count)


Soft-deletes old episodic memories below thresholds. Default: 180 days, importance <= 3, never accessed.

Agent Workflow


Before a task


```sql
SELECT id, title, content, relevance_score
FROM search_memories('keywords from user request', NULL, NULL, NULL, NULL, 5.0, 5);
```

If relevant memories are found, reference them: "Based on past experience (memory #1)..."

After solving a problem


```sql
SELECT store_memory(
    'semantic',
    'category-name',
    ARRAY['tag1', 'tag2', 'tag3'],
    'One-line problem summary',
    'Detailed problem + solution',
    '{"os": "...", "tools": [...]}',
    'agent-name',
    NULL,
    8.0
);
```

When delegating to subagents


Include in the prompt:

```
MUST DO FIRST:
  Search agent_memories: SELECT * FROM search_memories('relevant keywords', NULL, NULL, NULL, NULL, 5.0, 5);

MUST DO AFTER:
  If you solved something new, store it with store_memory(...)
```

Task Memory Layer (optional)


This skill also ships a minimal task/issue layer inspired by Beads: graph semantics + deterministic "ready work" queries.

Objects:
  • `agent_tasks`: tasks (status, priority, assignee)
  • `task_links`: typed links (`blocks`, `parent_child`, `related`, etc.)
  • `blocked_tasks_cache`: materialized cache to make ready queries fast
  • `task_memory_links`: link tasks to memories (`agent_memories`) for outcomes/notes

Create tasks:

```sql
INSERT INTO agent_tasks(title, description, created_by, priority)
VALUES ('Install pgvector', 'Windows build + enable extension', 'user', 1);
```

Add dependencies:

```sql
-- Task 1 blocks task 2
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (1, 2, 'blocks');

-- Task 2 is parent of task 3 (used for transitive blocking)
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (2, 3, 'parent_child');
```

Rebuild the blocked cache (usually automatic via triggers):

```sql
SELECT rebuild_blocked_tasks_cache();
```

Ready work query:

```sql
SELECT id, title, priority
FROM agent_tasks t
WHERE t.deleted_at IS NULL
  AND t.status IN ('open','in_progress')
  AND NOT EXISTS (SELECT 1 FROM blocked_tasks_cache b WHERE b.task_id = t.id)
ORDER BY priority ASC, updated_at ASC
LIMIT 50;
```

Claim a task (atomic):

```sql
SELECT claim_task(2, 'agent-1');
```

Link a task to a memory:

```sql
INSERT INTO task_memory_links(task_id, memory_id, link_type)
VALUES (2, 123, 'outcome');
```

Optional add-on: `conditional_blocks` (not implemented yet)
  • This is intentionally deferred until the core workflow feels solid.
  • If you need it now, store a condition in `task_links.metadata` (e.g., `{"os": "windows"}`) and treat it as documentation.

Wrapper scripts (recommended)


To avoid re-typing SQL, use the wrapper scripts shipped with this skill.

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" ready 50
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" create "Install pgvector" 1
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" claim 2 agent-1
```

Linux/macOS:

```bash
bash "scripts/tasks.sh" ready 50
bash "scripts/tasks.sh" create "Install pgvector" 1
bash "scripts/tasks.sh" claim 2 agent-1
```

Compaction Log (high value)


Compaction can delete context. Treat every compaction as an important event and record it.
If you're using OpenCode, prefer the OpenCode plugin route for automatic compaction logging.

OpenCode plugin (experimental.session.compacting)


  1. Copy `plugins/agent-memory-systems-postgres.js` to `~/.config/opencode/plugins/`
  2. Restart OpenCode

It writes local compaction events to:
  • `~/.config/opencode/agent-memory-systems-postgres/compaction-events.jsonl`

It will also attempt a best-effort Postgres `store_memory(...)` write (requires pgpass).
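The event log is append-only JSONL, one object per compaction. A sketch with a hypothetical entry (field names here are illustrative; the plugin defines the real schema):

```shell
# Append one illustrative compaction event and count logged events.
LOG="${TMPDIR:-/tmp}/compaction-events.jsonl"
printf '%s\n' '{"ts":"2024-01-01T00:00:00Z","event":"compaction","session":"s1"}' >> "$LOG"
wc -l < "$LOG"   # one line per recorded compaction
```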

Verify


```sql
SELECT id, title, relevance_score
FROM search_memories('compaction', NULL, NULL, NULL, NULL, 0, 10);
```

If nothing is inserted, set up `.pgpass` / `pgpass.conf` so `psql` can authenticate without prompting.

Daily Compaction Consolidation


Raw compaction events are noisy. Run a daily consolidation job that turns many compaction events into one daily memory.

The consolidation scripts default to the OpenCode plugin event log path and fall back to Claude Code paths if needed.
  • OpenCode events: `~/.config/opencode/agent-memory-systems-postgres/compaction-events.jsonl`
  • Output directory: `~/.config/opencode/agent-memory-systems-postgres/compaction-daily/`

Windows run (manual):

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\consolidate-compactions.ps1"
```

Linux/macOS run (manual):

```bash
bash "scripts/consolidate-compactions.sh"
```

Scheduling:
  • Windows Task Scheduler: create a daily task that runs the PowerShell command above
  • Linux cron example:

```bash
# every day at 02:10 UTC
10 2 * * * bash "<skill-dir>/scripts/consolidate-compactions.sh" >/dev/null 2>&1
```
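At its core, consolidation is a group-by-day over the JSONL events. A minimal sketch of that idea (the shipped scripts do more, e.g. writing the daily memory; this assumes `ts` is the first key in each line):

```shell
# Count compaction events per calendar day from a JSONL log whose first key is "ts".
DEMO="${TMPDIR:-/tmp}/events-demo.jsonl"
printf '%s\n' \
  '{"ts":"2024-01-01T01:00:00Z"}' \
  '{"ts":"2024-01-01T02:00:00Z"}' \
  '{"ts":"2024-01-02T01:00:00Z"}' > "$DEMO"
# Extract the timestamp, keep the date part, count per day.
cut -d'"' -f4 "$DEMO" | cut -dT -f1 | sort | uniq -c
```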

Appendix: Claude Code compatibility (optional)


This repository also includes Claude Code hook scripts under `hooks/`. They are not required for OpenCode usage.

Friction Log (turn pain into tooling)


Whenever something is annoying, brittle, or fails:
  1. Store an `episodic` memory with category `friction` and tags for the tool/OS/error.
  2. If it repeats (2+ times), promote it to `procedural` memory (importance >= 7) with a checklist.
  3. Update this skill doc when the fix becomes a stable rule/workflow (so every agent learns it).

Schema Overview


  • `agent_memories`: main table. Full-text search, trigram indexes, JSONB metadata, soft-delete.
  • `memory_links`: graph relationships (references, supersedes, contradicts).
  • `working_memory`: ephemeral session context with auto-expire.

Key columns: `memory_type`, `category`, `tags[]`, `title`, `content`, `content_hash` (auto), `metadata` (JSONB), `importance_score`, `access_count`, `relevance_decay`, `search_vector` (auto).

Anti-Patterns


| Don't | Do Instead |
|---|---|
| Store everything | Only store non-obvious solutions |
| Skip tags | Tag comprehensively: tech, error codes, platform |
| Use `to_tsquery` directly | `search_memories()` handles this via `plainto_tsquery` |
| One type for all data | Use the correct `memory_type` per content |
| Forget importance rating | Rate honestly: 9-10 battle-tested, 5-6 partial |

Sharp Edges


| Issue | Severity | Mitigation |
|---|---|---|
| Chunks lose context | Critical | Store the full problem + solution as one unit |
| Old tech memories | High | `apply_memory_decay()` daily; prune stale |
| Duplicate memories | Medium | `store_memory()` auto-deduplicates by `content_hash` |
| No vector search | Info | pg_trgm provides fuzzy matching; pgvector can be added later |

Cross-Platform Notes


  • PostgreSQL 14-18 supported (no partitioning, no GENERATED ALWAYS)
  • pg_trgm is the only required extension (ships with all PG distributions)
  • Linux: `psql -U postgres -d agent_memory -f init.sql`
  • Windows: use the full path to psql.exe or add the PG bin directory to PATH
  • MCP postgres_query: works for read operations; DDL requires psql

Maintenance


```sql
SELECT apply_memory_decay();                          -- daily
SELECT prune_stale_memories(180, 3.0, 0);             -- monthly
DELETE FROM working_memory WHERE expires_at < NOW();  -- daily
SELECT * FROM memory_health_check();                  -- anytime
```

Optional: pgvector Semantic Search


If pgvector is installed on your PostgreSQL server, `init.sql` will:
  • create extension `vector` (non-fatal if missing)
  • add `agent_memories.embedding vector` (variable dimension)
  • create `search_memories_vector(p_embedding, p_embedding_dim, ...)`

Notes:
  • This does NOT generate embeddings. You must populate `agent_memories.embedding` yourself.
  • Once embeddings exist, you can do nearest-neighbor search:

```sql
-- p_embedding is a pgvector literal; pass it from your app.
-- Optionally filter by dimension (recommended when using multiple models).
SELECT id, title, similarity
FROM search_memories_vector('[0.01, 0.02, ...]'::vector, 768, NULL, NULL, NULL, NULL, 0.0, 10);
```

Note: variable-dimension vectors cannot be indexed with pgvector indexes. This is a tradeoff to support local models with different embedding sizes.

If pgvector is not installed, everything else still works (FTS + pg_trgm).

Embedding Ingestion Pipeline


pgvector search only works after you populate `agent_memories.embedding`.

This skill ships ingestion scripts (relative paths). Run from the skill directory:
  • scripts/ingest-embeddings.ps1
  • scripts/ingest-embeddings.sh

They:
  • find memories with `embedding IS NULL`
  • call an OpenAI-compatible embeddings endpoint (including Ollama)
  • write vectors into `agent_memories.embedding vector`

Requirements:
  • pgvector installed + `init.sql` applied (so `agent_memories.embedding` exists)
  • `.pgpass` / `pgpass.conf` configured (so `psql -w` can write without prompting)
  • env vars for the embedding API:
    • `EMBEDDING_PROVIDER` (`ollama` or `openai`; default `openai`)
    • `EMBEDDING_API_KEY` (required for `openai`; optional for `ollama`)
    • `EMBEDDING_API_URL` (default depends on provider)
    • `EMBEDDING_MODEL` (default depends on provider)
    • `EMBEDDING_DIMENSIONS` (optional; forwarded to the embeddings endpoint when supported)

Windows example:

```powershell
$env:EMBEDDING_PROVIDER = "ollama"
$env:EMBEDDING_MODEL = "nomic-embed-text"
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\ingest-embeddings.ps1" -Limit 25
```

Linux/macOS example:

```bash
export EMBEDDING_API_KEY=...
export EMBEDDING_MODEL=text-embedding-3-small
bash "scripts/ingest-embeddings.sh"
```

Scheduling:
  • run daily (or hourly) after you add new memories
  • keep `Limit` small until you trust it

Robustness note:
  • On Windows, very long SQL strings can be fragile when passed via `psql -c`. The ingestion script writes per-row updates to a temporary `.sql` file and runs `psql -f` to avoid command-line length/quoting edge cases.
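One fiddly detail in any ingestion pipeline is formatting the pgvector literal itself. A hypothetical helper (illustrative only, not the shipped scripts' actual code):

```shell
# Join float arguments into a pgvector literal like [0.1,0.2,0.3].
to_vector_literal() {
  joined=$(printf '%s,' "$@")   # "0.1,0.2,0.3,"
  printf '[%s]\n' "${joined%,}" # strip trailing comma, wrap in brackets
}
to_vector_literal 0.1 0.2 0.3   # prints [0.1,0.2,0.3]
```

The resulting string can be passed to `search_memories_vector(...)` as the `'[...]'::vector` argument.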

Related Skills


`systematic-debugging`, `postgres-pro`, `postgresql-table-design`