agent-memory-systems-postgres
# Agent Memory Systems (PostgreSQL)
Persistent shared memory for all AI agents. PostgreSQL 14+ on Linux or Windows.
Memory failures look like intelligence failures — this skill ensures the right memory is retrieved at the right time.
## Quick Start
Database `agent_memory` and all functions are created by `init.sql` in this skill directory.

Linux:

```bash
psql -U postgres -c "CREATE DATABASE agent_memory;"
psql -U postgres -d agent_memory -f init.sql
```
Windows (adjust the path to your psql.exe):

```powershell
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -c "CREATE DATABASE agent_memory;"
& "C:\Program Files\PostgreSQL\18\bin\psql.exe" -U postgres -d agent_memory -f init.sql
```

Verify: `SELECT * FROM memory_health_check();`

## Pure Skill Mode (default)
This skill works without installing any plugin. In pure skill mode:
- you manually run scripts when you want (progressive disclosure)
- no global OpenCode config is modified automatically
## Optional bootstrap (asks + records choices + tries to install)
Notes:

- Interactive mode defaults to NOT installing heavy optional components.
- Use `-InstallAll` / `--install-all` only when you're ready to install everything.

Run the bootstrap script to choose optional components (pgpass, local embeddings, pgvector) and record decisions.

Windows:

```powershell
# run from the skill directory
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\bootstrap.ps1"
```
Linux/macOS:

```bash
# run from the skill directory
bash "scripts/bootstrap.sh"
```
The selection record is stored at:

- `~/.config/opencode/agent-memory-systems-postgres/setup.json`

Agent rule:

- If this file does not exist, ask the user if they want to enable optional components.
- Recommended: run bootstrap with all options enabled (then fix any failures it reports).

On Windows, pgvector installation follows the official pgvector instructions (Visual Studio C++ + `nmake /F Makefile.win`). The bootstrap will attempt to install prerequisites via `winget`.

## Optional automation: compaction logging (OpenCode plugin)
If you want automatic compaction logging, install the OpenCode plugin template shipped with this skill.

- Copy `plugins/agent-memory-systems-postgres.js` to `~/.config/opencode/plugins/`
- Restart OpenCode
## Credentials (psql)
Do NOT hardcode passwords in scripts, skill docs, or config files.

Recommended options for non-interactive `psql`:

- `.pgpass` / `pgpass.conf` (recommended)
  - Linux/macOS: `~/.pgpass` (must be `chmod 0600 ~/.pgpass` or libpq will ignore it)
  - Windows: `%APPDATA%\postgresql\pgpass.conf` (example: `C:\Users\<you>\AppData\Roaming\postgresql\pgpass.conf`)
  - Format: `hostname:port:database:username:password`
  - Docs: https://www.postgresql.org/docs/current/libpq-pgpass.html
- `PGPASSFILE` (optional override): point to a custom location for the password file
- `PGPASSWORD` (not recommended): only for quick local testing; environment variables can leak on some systems
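A quick sketch of writing a pgpass entry with the required permissions (a hypothetical helper, not one of this skill's scripts; the credentials are illustrative):

```python
import os
import stat
import tempfile

def write_pgpass_line(path, host, port, db, user, password):
    """Append one libpq pgpass entry and enforce the 0600 mode libpq requires."""
    # Format documented above: hostname:port:database:username:password
    line = f"{host}:{port}:{db}:{user}:{password}\n"
    with open(path, "a") as f:
        f.write(line)
    os.chmod(path, 0o600)  # libpq ignores the file if group/other bits are set
    return line.strip()

# Demo against a temp file instead of the real ~/.pgpass:
demo_path = os.path.join(tempfile.mkdtemp(), "pgpass")
print(write_pgpass_line(demo_path, "localhost", 5432, "agent_memory", "postgres", "secret"))
# localhost:5432:agent_memory:postgres:secret
```

Point `PGPASSFILE` at a custom path like this only for testing; the default locations above are what libpq checks automatically.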
Tip: set connection defaults once (per shell) to shorten commands:

```bash
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=agent_memory
export PGUSER=postgres
```

## One-time setup helper scripts
This skill ships helper scripts (relative paths):

- `scripts/setup-pgpass.ps1`
- `scripts/setup-pgpass.sh`

OpenCode usage: run them from the skill directory.

Windows run:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\setup-pgpass.ps1"
```

Linux/macOS run:

```bash
bash "scripts/setup-pgpass.sh"
```

## Memory Types
| Type | Lifespan | Use When |
|---|---|---|
| `working` | 24h auto-expire | Current conversation context (requires `session_id`) |
| `episodic` | Permanent + decay | Problem-solving experiences, debugging sessions |
| `semantic` | Permanent | Extracted facts, knowledge, patterns |
| `procedural` | Permanent | Step-by-step procedures, checklists (importance >= 7) |
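A sketch of picking the right type when storing (uses the `store_memory` signature from Core Functions below; the category, tag, and importance values are illustrative):

```sql
-- Episodic: a one-off debugging experience (subject to decay)
SELECT store_memory('episodic', 'friction', ARRAY['psql', 'windows'],
  'Long SQL via psql -c failed',
  'Quoting broke on a long -c string; wrote a temp .sql file and used psql -f instead',
  '{"os": "Windows 11"}', 'agent-1', NULL, 5.0);

-- Procedural: a repeatable checklist (importance >= 7)
SELECT store_memory('procedural', 'setup', ARRAY['pgpass'],
  'Configure .pgpass',
  '1) create the file 2) chmod 0600 3) verify psql -w connects',
  '{}', 'agent-1', NULL, 7.5);
```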
## Core Functions
### store_memory(type, category, tags[], title, content, metadata, agent_id, session_id, importance)

Auto-deduplicates by content hash. Duplicate inserts bump `access_count` and `importance_score`.

```sql
SELECT store_memory(
  'semantic',
  'windows-networking',
  ARRAY['ssh', 'tunnel', 'port-conflict'],
  'SSH Tunnel Port Conflict Resolution',
  'Fix: 1) taskkill /F /IM ssh.exe 2) Use processId not pid 3) Wait 3s',
  '{"os": "Windows 11"}',
  'sisyphus',
  NULL,
  9.0
);
```
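The dedup behavior can be illustrated in miniature (a conceptual Python sketch of the idea, not the SQL implementation; exactly how the bump is computed is an assumption):

```python
import hashlib

memories = {}  # content_hash -> row

def store_memory_sketch(title, content, importance):
    """Same content hash -> bump counters instead of inserting a new row."""
    h = hashlib.sha256(content.encode()).hexdigest()
    if h in memories:
        row = memories[h]
        row["access_count"] += 1
        # assumption: keep the higher importance on duplicate inserts
        row["importance_score"] = max(row["importance_score"], importance)
    else:
        memories[h] = {"title": title, "access_count": 1, "importance_score": importance}
    return memories[h]

store_memory_sketch("SSH fix", "taskkill /F /IM ssh.exe", 8.0)
row = store_memory_sketch("SSH fix again", "taskkill /F /IM ssh.exe", 9.0)
print(row["access_count"], row["importance_score"])  # 2 9.0
```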
### search_memories(query, types[], categories[], tags[], agent_id, min_importance, limit)

Hybrid search: full-text (tsvector) + trigram similarity (pg_trgm) + tag filtering.
Accepts plain English queries (no tsquery syntax needed).

Relevance scoring: `text_score * decay * recency * importance`.

```sql
-- Natural language
SELECT * FROM search_memories('ssh tunnel port conflict', NULL, NULL, NULL, NULL, 7.0, 5);

-- Filter by type + tags
SELECT * FROM search_memories(
  'troubleshooting steps',
  ARRAY['procedural']::memory_type[],
  NULL,
  ARRAY['ssh'],
  NULL, 0.0, 5
);
```

Returns: `id, memory_type, category, title, content, importance_score, relevance_score, match_type`.
Where `match_type` is one of: `fulltext`, `trigram_title`, `trigram_content`, `metadata`.
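The scoring can be sketched numerically (a toy illustration of the documented `text_score * decay * recency * importance` product; the component values are made up):

```python
def relevance(text_score, decay, recency, importance):
    # relevance = text_score * decay * recency * importance
    return text_score * decay * recency * importance

# A decent match on a fresh, important memory outranks a perfect match
# on a stale, low-importance one:
fresh = relevance(text_score=0.6, decay=1.0, recency=0.9, importance=9.0)
stale = relevance(text_score=1.0, decay=0.7, recency=0.3, importance=3.0)
print(fresh > stale)  # True
```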
### memory_health_check()

Returns `metric | value | status` rows for `total_memories`, `avg_importance`, `stale_count`.

### apply_memory_decay()

Decays episodic memories by `0.9999^days_since_access`. Run daily.

### prune_stale_memories(age_days, max_importance, max_access_count)

Soft-deletes old episodic memories below thresholds. Default: 180 days, importance <= 3, never accessed.
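To see how gentle the decay is, a quick sketch of the `0.9999^days_since_access` factor (illustrative only; the real update happens inside `apply_memory_decay()`):

```python
def decay_factor(days_since_access):
    # the documented episodic decay factor
    return 0.9999 ** days_since_access

# A full year of inactivity only scales the score by roughly 0.96:
for days in (30, 180, 365):
    print(days, round(decay_factor(days), 4))
```

Given how slowly this curve falls, `prune_stale_memories` above presumably does most of the actual cleanup.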
## Agent Workflow

### Before a task

```sql
SELECT id, title, content, relevance_score
FROM search_memories('keywords from user request', NULL, NULL, NULL, NULL, 5.0, 5);
```

If relevant memories are found, reference them: "Based on past experience (memory #1)..."
### After solving a problem

```sql
SELECT store_memory(
  'semantic',
  'category-name',
  ARRAY['tag1', 'tag2', 'tag3'],
  'One-line problem summary',
  'Detailed problem + solution',
  '{"os": "...", "tools": [...]}',
  'agent-name',
  NULL,
  8.0
);
```

### When delegating to subagents
Include in the prompt:

```
MUST DO FIRST:
Search agent_memories: SELECT * FROM search_memories('relevant keywords', NULL, NULL, NULL, NULL, 5.0, 5);

MUST DO AFTER:
If you solved something new, store it with store_memory(...)
```

## Task Memory Layer (optional)
This skill also ships a minimal task/issue layer inspired by Beads: graph semantics + deterministic "ready work" queries.

Objects:

- `agent_tasks`: tasks (status, priority, assignee)
- `task_links`: typed links (`blocks`, `parent_child`, `related`, etc.)
- `blocked_tasks_cache`: materialized cache to make ready queries fast
- `task_memory_links`: link tasks to memories (`agent_memories`) for outcomes/notes

Create tasks:

```sql
INSERT INTO agent_tasks(title, description, created_by, priority)
VALUES ('Install pgvector', 'Windows build + enable extension', 'user', 1);
```

Add dependencies:

```sql
-- Task 1 blocks task 2
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (1, 2, 'blocks');

-- Task 2 is parent of task 3 (used for transitive blocking)
INSERT INTO task_links(from_task_id, to_task_id, link_type)
VALUES (2, 3, 'parent_child');
```

Rebuild the blocked cache (usually automatic via triggers):

```sql
SELECT rebuild_blocked_tasks_cache();
```

Ready work query:

```sql
SELECT id, title, priority
FROM agent_tasks t
WHERE t.deleted_at IS NULL
  AND t.status IN ('open','in_progress')
  AND NOT EXISTS (SELECT 1 FROM blocked_tasks_cache b WHERE b.task_id = t.id)
ORDER BY priority ASC, updated_at ASC
LIMIT 50;
```

Claim a task (atomic):

```sql
SELECT claim_task(2, 'agent-1');
```

Link a task to a memory:

```sql
INSERT INTO task_memory_links(task_id, memory_id, link_type)
VALUES (2, 123, 'outcome');
```

Optional add-on: `conditional_blocks` (not implemented yet)

- This is intentionally deferred until the core workflow feels solid.
- If you need it now, store a condition in `task_links.metadata` (e.g., `{ "os": "windows" }`) and treat it as documentation.
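For intuition, the blocking semantics the cache materializes can be sketched in Python (a toy model; which statuses count as unresolved and how `parent_child` propagates are assumptions about what `rebuild_blocked_tasks_cache()` computes):

```python
def blocked_tasks(unresolved, links):
    """Toy model: a task is blocked when an unresolved task 'blocks' it,
    and blocking propagates down 'parent_child' links (transitive)."""
    blocked = set()
    for src, dst, kind in links:          # direct blocks
        if kind == "blocks" and src in unresolved:
            blocked.add(dst)
    changed = True
    while changed:                        # propagate to children until fixpoint
        changed = False
        for src, dst, kind in links:
            if kind == "parent_child" and src in blocked and dst not in blocked:
                blocked.add(dst)
                changed = True
    return blocked

# Task 1 blocks task 2; task 2 is parent of task 3 -> 2 and 3 are both blocked
links = [(1, 2, "blocks"), (2, 3, "parent_child")]
print(sorted(blocked_tasks({1}, links)))  # [2, 3]
```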
## Wrapper scripts (recommended)
To avoid re-typing SQL, use the wrapper scripts shipped with this skill:

Windows:

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" ready 50
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" create "Install pgvector" 1
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\tasks.ps1" claim 2 agent-1
```

Linux/macOS:

```bash
bash "scripts/tasks.sh" ready 50
bash "scripts/tasks.sh" create "Install pgvector" 1
bash "scripts/tasks.sh" claim 2 agent-1
```

## Compaction Log (high value)
Compaction can delete context. Treat every compaction as an important event and record it.

If you're using OpenCode, prefer the OpenCode plugin route for automatic compaction logging.
### OpenCode plugin (experimental.session.compacting)
- Copy `plugins/agent-memory-systems-postgres.js` to `~/.config/opencode/plugins/`
- Restart OpenCode

It writes local compaction events to:
`~/.config/opencode/agent-memory-systems-postgres/compaction-events.jsonl`

It will also attempt a best-effort Postgres write via `store_memory(...)` (requires pgpass).

### Verify
```sql
SELECT id, title, relevance_score
FROM search_memories('compaction', NULL, NULL, NULL, NULL, 0, 10);
```

If nothing is inserted, set up `.pgpass` / `pgpass.conf` so `psql` can authenticate without prompting.

## Daily Compaction Consolidation
Raw compaction events are noisy. Run a daily consolidation job that turns many compaction events into one daily memory.

The consolidation scripts default to the OpenCode plugin event log path and fall back to Claude Code paths if needed.

- OpenCode events: `~/.config/opencode/agent-memory-systems-postgres/compaction-events.jsonl`
- Output directory: `~/.config/opencode/agent-memory-systems-postgres/compaction-daily/`

Windows run (manual):

```powershell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\consolidate-compactions.ps1"
```

Linux/macOS run (manual):

```bash
bash "scripts/consolidate-compactions.sh"
```

Scheduling:

- Windows Task Scheduler: create a daily task that runs the PowerShell command above
- Linux cron example:

```bash
# every day at 02:10 UTC
10 2 * * * bash "<skill-dir>/scripts/consolidate-compactions.sh" >/dev/null 2>&1
```

## Appendix: Claude Code compatibility (optional)
This repository also includes Claude Code hook scripts under `hooks/`. They are not required for OpenCode usage.

## Friction Log (turn pain into tooling)
Whenever something is annoying, brittle, or fails:

- Store an `episodic` memory with category `friction` and tags for the tool/OS/error.
- If it repeats (2+ times), promote it to a `procedural` memory (importance >= 7) with a checklist.
- Update this skill doc when the fix becomes a stable rule/workflow (so every agent learns it).
## Schema Overview

Tables: `agent_memories`, `memory_links`, `working_memory`.

Key columns: `memory_type`, `category`, `tags[]`, `title`, `content`, `content_hash` (auto), `metadata` (JSONB), `importance_score`, `access_count`, `relevance_decay`, `search_vector` (auto).

## Anti-Patterns
| Don't | Do Instead |
|---|---|
| Store everything | Only store non-obvious solutions |
| Skip tags | Tag comprehensively: tech, error codes, platform |
| One type for all data | Use the correct memory_type per content |
| Forget importance rating | Rate honestly: 9-10 battle-tested, 5-6 partial |
## Sharp Edges

| Issue | Severity | Mitigation |
|---|---|---|
| Chunks lose context | Critical | Store the full problem + solution as one unit |
| Old tech memories | High | Run `apply_memory_decay()` daily |
| Duplicate memories | Medium | `store_memory()` dedupes by content hash |
| No vector search | Info | pg_trgm provides fuzzy matching; pgvector can be added later |
## Cross-Platform Notes

- PostgreSQL 14-18 supported (no partitioning, no GENERATED ALWAYS)
- pg_trgm is the only required extension (ships with all PG distributions)
- Linux: `psql -U postgres -d agent_memory -f init.sql`
- Windows: use the full path to psql.exe, or add the PG bin directory to PATH
- MCP postgres_query: works for read operations; DDL requires psql
## Maintenance

```sql
SELECT apply_memory_decay();                          -- daily
SELECT prune_stale_memories(180, 3.0, 0);             -- monthly
DELETE FROM working_memory WHERE expires_at < NOW();  -- daily
SELECT * FROM memory_health_check();                  -- anytime
```
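These can be wired into cron like the consolidation job above (a sketch; the schedule times are arbitrary, and it assumes `.pgpass` is configured so `psql -w` never prompts):

```bash
# daily decay + working-memory cleanup at 02:15/02:20, monthly prune on the 1st
15 2 * * * psql -w -U postgres -d agent_memory -c "SELECT apply_memory_decay();"
20 2 * * * psql -w -U postgres -d agent_memory -c "DELETE FROM working_memory WHERE expires_at < NOW();"
25 2 1 * * psql -w -U postgres -d agent_memory -c "SELECT prune_stale_memories(180, 3.0, 0);"
```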
## Optional: pgvector Semantic Search
If pgvector is installed on your PostgreSQL server, `init.sql` will:

- create the `vector` extension (non-fatal if missing)
- add `agent_memories.embedding vector` (variable dimension)
- create `search_memories_vector(p_embedding, p_embedding_dim, ...)`

Notes:

- This does NOT generate embeddings. You must populate `agent_memories.embedding` yourself.
- Once embeddings exist, you can do nearest-neighbor search:

```sql
-- p_embedding is a pgvector literal; pass it from your app.
-- Optionally filter by dimension (recommended when using multiple models).
SELECT id, title, similarity
FROM search_memories_vector('[0.01, 0.02, ...]'::vector, 768, NULL, NULL, NULL, NULL, 0.0, 10);
```

Note: variable-dimension vectors cannot be indexed with pgvector indexes. This is a tradeoff to support local models with different embedding sizes.

If pgvector is not installed, everything else still works (fts + pg_trgm).
## Embedding Ingestion Pipeline
pgvector search only works after you populate `agent_memories.embedding`.

This skill ships ingestion scripts (relative paths). Run from the skill directory:

- `scripts/ingest-embeddings.ps1`
- `scripts/ingest-embeddings.sh`

They:

- find memories with `embedding IS NULL`
- call an OpenAI-compatible embeddings endpoint (including Ollama)
- write vectors into `agent_memories.embedding vector`

Requirements:

- pgvector installed + `init.sql` applied (so `agent_memories.embedding` exists)
- `.pgpass` / `pgpass.conf` configured (so `psql -w` can write without prompting)
- env vars for the embedding API:
  - `EMBEDDING_PROVIDER` (`openai` or `ollama`; default `openai`)
  - `EMBEDDING_API_KEY` (required for `openai`; optional for `ollama`)
  - `EMBEDDING_API_URL` (default depends on provider)
  - `EMBEDDING_MODEL` (default depends on provider)
  - `EMBEDDING_DIMENSIONS` (optional; forwarded to the embeddings endpoint when supported)

Windows example:

```powershell
$env:EMBEDDING_PROVIDER = "ollama"
$env:EMBEDDING_MODEL = "nomic-embed-text"
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "scripts\ingest-embeddings.ps1" -Limit 25
```

Linux/macOS example:

```bash
export EMBEDDING_API_KEY=...
export EMBEDDING_MODEL=text-embedding-3-small
bash "scripts/ingest-embeddings.sh"
```

Scheduling:

- run daily (or hourly) after you add new memories
- keep `Limit` small until you trust it

Robustness note:

- On Windows, very long SQL strings can be fragile when passed via `psql -c`. The ingestion script writes per-row updates to a temporary `.sql` file and runs `psql -f` to avoid command-line length/quoting edge cases.
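The one client-side detail worth seeing is how an embeddings response becomes a pgvector literal; a minimal sketch (the helper names and the inline UPDATE are illustrative, not the actual script internals; real code should parameterize):

```python
import json

def to_pgvector_literal(embedding):
    """Render a float list as pgvector input, e.g. [0.01,0.02,0.03]."""
    return "[" + ",".join(repr(float(x)) for x in embedding) + "]"

def embedding_update_sql(memory_id, embedding):
    lit = to_pgvector_literal(embedding)
    return f"UPDATE agent_memories SET embedding = '{lit}'::vector WHERE id = {int(memory_id)};"

# Typical OpenAI-compatible response shape: {"data": [{"embedding": [...]}]}
resp = json.loads('{"data": [{"embedding": [0.01, 0.02, 0.03]}]}')
vec = resp["data"][0]["embedding"]
print(embedding_update_sql(7, vec))
# UPDATE agent_memories SET embedding = '[0.01,0.02,0.03]'::vector WHERE id = 7;
```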
## Related Skills

- systematic-debugging
- postgres-pro
- postgresql-table-design