# Remembering - Advanced Operations
Basic patterns are in project instructions. This skill covers advanced features and edge cases.
For development context, see references/CLAUDE.md.
## Two-Table Architecture
| Table | Purpose | Growth |
|---|---|---|
| `config` | Stable operational state (profile + ops + journal) | Small, mostly static |
| `memories` | Timestamped observations | Unbounded |

Config loads fast at startup. Memories are queried as needed.
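A minimal in-memory sketch of this split, using Python's built-in sqlite3; the column layouts here are assumptions for illustration, not the skill's actual schema:

```python
import sqlite3

# Illustrative sketch of the two-table split. Column names are assumptions,
# not the skill's actual schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT, category TEXT)")
db.execute("""CREATE TABLE memories (
    id TEXT PRIMARY KEY, summary TEXT, type TEXT,
    confidence REAL, priority INTEGER, created_at TEXT)""")

# config: small and mostly static, so it can be loaded wholesale at boot
db.execute("INSERT INTO config VALUES ('identity', 'example', 'profile')")
boot_config = {k: v for k, v, _ in db.execute("SELECT * FROM config")}

# memories: unbounded, so it is only ever touched by targeted queries
db.execute("INSERT INTO memories VALUES "
           "('m1', 'User prefers dark mode', 'decision', 0.8, 0, '2025-01-01')")
rows = db.execute("SELECT summary FROM memories WHERE type = 'decision'").fetchall()
```

The point of the split: the `config` read returns a handful of rows at boot, while `memories` grows without bound and is never loaded wholesale.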
## Boot Sequence
Load context at conversation start to maintain continuity across sessions.
```python
from scripts import boot
print(boot())
```

Performance: ~150ms (single HTTP request). All queries go directly to Turso.

Boot includes a `# CAPABILITIES` section reporting GitHub access and installed utilities. See references/advanced-operations.md for details.
## Memory Type System
Type is required on all write operations. Valid types:

| Type | Use For | Defaults |
|---|---|---|
| `decision` | Explicit choices: prefers X, always/never do Y | conf=0.8 |
| `world` | External facts: tasks, deadlines, project state | |
| `anomaly` | Errors, bugs, unexpected behavior | |
| `experience` | General observations, catch-all | |
| `procedure` | Workflows, step-by-step processes, decision trees | conf=0.9, priority=1 |

```python
from scripts import TYPES  # {'decision', 'world', 'anomaly', 'experience', 'procedure'}
```
## Procedural Memories (v4.4.0)
Store reusable workflows and operational patterns as first-class memories:
```python
from scripts import remember, recall

# Store a workflow
id = remember(
    "Deploy workflow: 1) Run tests 2) Build artifacts 3) Push to staging 4) Smoke test 5) Promote to prod",
    "procedure",
    tags=["deployment", "workflow"],
)

# Retrieve workflows
procedures = recall(type="procedure", tags=["deployment"])
```

Procedural memories default to `confidence=0.9` and `priority=1` (important), ensuring they survive age-based pruning. Use tags to categorize by domain and workflow name for targeted retrieval.
## Core Operations
### Remember
```python
from scripts import remember, remember_bg, flush

# Blocking write (default)
id = remember("User prefers dark mode", "decision", tags=["ui"], conf=0.9)

# Background write (non-blocking)
remember("Quick note", "world", sync=False)

# Ensure all background writes complete before conversation ends
flush()
```

**When to use sync=False:** Storing derived insights during active work, when latency matters.

**When to use sync=True (default):** User explicitly requests storage, critical memories, handoffs.
### Recall
```python
from scripts import recall

# FTS5 search with BM25 ranking + Porter stemmer
memories = recall("dark mode")

# Filtered queries
decisions = recall(type="decision", conf=0.85, n=20)
tasks = recall("API", tags=["task"], n=15)
urgent = recall(tags=["task", "urgent"], tag_mode="all", n=10)

# Comprehensive retrieval (v4.1.0)
all_memories = recall(fetch_all=True, n=1000)  # Get all memories without search filtering

# Time-windowed queries (v4.3.0) - since/until with inclusive bounds
recent = recall("API", since="2025-02-01")
jan_memories = recall(since="2025-01-01", until="2025-01-31T23:59:59Z")

# Multi-tag convenience (v4.3.0)
both = recall(tags_all=["correction", "bsky"])             # AND: must have all tags
either = recall(tags_any=["therapy", "self-improvement"])  # OR: any tag matches

# Wildcard patterns are NOT supported - use fetch_all instead
recall("*", n=1000)             # ❌ Raises ValueError
recall(fetch_all=True, n=1000)  # ✅ Correct approach
```

Results return as `MemoryResult` objects with attribute and dict access. Common aliases (`m.content` -> `m.summary`, `m.conf` -> `m.confidence`) resolve transparently.
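The alias behavior can be sketched as a small dict subclass. This is a hypothetical illustration of the access pattern, not the library's actual `MemoryResult` implementation:

```python
# Hypothetical sketch of alias resolution on a result object -- not the
# library's actual MemoryResult class.
class MemoryResultSketch(dict):
    _ALIASES = {"content": "summary", "conf": "confidence"}

    def __getattr__(self, name):
        # Attribute access falls through to dict keys, resolving aliases first
        key = self._ALIASES.get(name, name)
        try:
            return self[key]
        except KeyError:
            raise AttributeError(name)

m = MemoryResultSketch(summary="User prefers dark mode", confidence=0.9)
print(m.content)     # alias for m.summary
print(m.conf)        # alias for m.confidence
print(m["summary"])  # plain dict access still works
```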
## Decision Alternatives (v4.2.0)
Track rejected alternatives on decision memories to prevent revisiting settled conclusions:
```python
from scripts import remember, get_alternatives

# Store decision with alternatives considered
id = remember(
    "Chose PostgreSQL for the database",
    "decision",
    tags=["architecture", "database"],
    alternatives=[
        {"option": "MongoDB", "rejected": "Schema-less adds complexity for our relational data"},
        {"option": "SQLite", "rejected": "Doesn't support concurrent writes at our scale"},
    ]
)

# Later: retrieve what was considered
alts = get_alternatives(id)
for alt in alts:
    print(f"Rejected {alt['option']}: {alt.get('rejected', 'no reason')}")
```

Alternatives are stored in the `refs` field as a typed object alongside memory ID references. The `alternatives` computed field is automatically extracted on `MemoryResult` objects for decision memories.
## Reference Chain Traversal (v4.3.0)
Follow reference chains to build context graphs around a memory:
```python
from scripts import get_chain

# Follow refs up to 3 levels deep (default)
chain = get_chain("memory-uuid", depth=3)
for m in chain:
    print(f"[depth={m['_chain_depth']}] {m['summary'][:80]}")
```

Useful for understanding supersede chains, consolidated memory origins, etc. Handles cycles via a visited set. Max depth is capped at 10.
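The cycle handling can be illustrated with a toy refs graph; `get_chain_sketch` below is an illustrative stand-in for the traversal logic, not the library's internals:

```python
# Sketch of cycle-safe reference-chain traversal with a visited set and a
# depth cap, over a toy in-memory refs graph (not the library's internals).
def get_chain_sketch(graph, start, depth=3, max_depth=10):
    depth = min(depth, max_depth)
    chain, visited, frontier = [], {start}, [(start, 0)]
    while frontier:
        node, d = frontier.pop(0)
        chain.append({"id": node, "_chain_depth": d})
        if d < depth:
            for ref in graph.get(node, []):
                if ref not in visited:  # visited set breaks cycles
                    visited.add(ref)
                    frontier.append((ref, d + 1))
    return chain

# a -> b -> c -> a is a cycle; traversal still terminates
graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
ids = [m["id"] for m in get_chain_sketch(graph, "a")]
```

The `a -> b -> c -> a` cycle terminates because each id is enqueued at most once.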
## Batch Operations (v4.5.0)
Execute multiple memory operations in a single HTTP round-trip, reducing tool call overhead:
```python
from scripts import recall_batch, remember_batch

# Multiple searches in one call (uses server-side FTS5 with BM25 ranking)
results = recall_batch(["architecture", "turso", "FTS5"], n=5)
for i, result_set in enumerate(results):
    print(f"Query {i}: {len(result_set)} results")

# Multiple stores in one call
ids = remember_batch([
    {"what": "User prefers dark mode", "type": "decision", "tags": ["ui"]},
    {"what": "Project uses React", "type": "world", "tags": ["tech"]},
    {"what": "Found auth bug", "type": "anomaly", "conf": 0.7},
])
```

`recall_batch()` uses server-side FTS5 with composite scoring (BM25 × recency × priority). Falls back to sequential `recall()` calls if server FTS5 is unavailable. `remember_batch()` validates each item independently — per-item errors return `{"error": str}` without blocking other items.
## Forget and Supersede
```python
from scripts import forget, supersede

# Soft delete (sets deleted_at, excluded from queries)
forget("memory-uuid")

# Version without losing history
supersede(original_id, "User now prefers Python 3.12", "decision", conf=0.9)
```
## Config Table
Key-value store for profile (behavioral), ops (operational), and journal (temporal) settings.
```python
from scripts import config_get, config_set, config_delete, config_list, profile, ops

# Read
config_get("identity")  # Single key
profile()               # All profile entries
ops()                   # All ops entries
config_list()           # Everything

# Write
config_set("new-key", "value", "profile")                            # Category: 'profile', 'ops', or 'journal'
config_set("bio", "Short bio here", "profile", char_limit=500)       # Enforce max length
config_set("core-rule", "Never modify this", "ops", read_only=True)  # Mark immutable

# Delete
config_delete("old-key")
```

For progressive disclosure, priority-based ordering, and dynamic topic categories, see [references/advanced-operations.md](references/advanced-operations.md).
## Journal System
Temporal awareness via rolling journal entries in config.
```python
from scripts import journal, journal_recent, journal_prune

# Record what happened this interaction
journal(
    topics=["project-x", "debugging"],
    user_stated="Will review PR tomorrow",
    my_intent="Investigating memory leak"
)

# Boot: load recent entries for context
for entry in journal_recent(10):
    print(f"[{entry['t'][:10]}] {entry.get('topics', [])}: {entry.get('my_intent', '')}")

# Maintenance: keep last 40 entries
pruned = journal_prune(keep=40)
```
## Background Writes
Use `remember(..., sync=False)` for background writes. Always call `flush()` before conversation ends to ensure persistence.

```python
from scripts import remember, flush

remember("Derived insight", "experience", sync=False)
remember("Another note", "world", sync=False)

# Before conversation ends:
flush()  # Blocks until all background writes finish
```

`remember_bg()` still works as a deprecated alias for `remember(..., sync=False)`.
## Memory Quality Guidelines
Write complete, searchable summaries that stand alone without conversation context:
- Good: "User prefers direct answers with code examples over lengthy conceptual explanations"
- Bad: "User wants code" (lacks context, unsearchable)
- Bad: "User asked question" + "gave code" + "seemed happy" (fragmented)
## Edge Cases

- Empty recall results: Returns `MemoryResultList([])`, not an error
- Tag partial matching: `tags=["task"]` matches memories with tags `["task", "urgent"]`
- Confidence defaults: `decision` type defaults to 0.8 if not specified
- Invalid type: Raises `ValueError` with a list of valid types
- Tag mode: `tag_mode="all"` requires all tags present; `tag_mode="any"` (default) matches any tag
- Query expansion: When FTS5 returns fewer than `expansion_threshold` results (default 3), tags from partial matches find related memories. Set `expansion_threshold=0` to disable.
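The query-expansion bullet can be sketched as a two-phase lookup; `recall_with_expansion` and its `search`/`by_tags` callables are hypothetical stand-ins for the real retrieval path:

```python
# Sketch of the query-expansion idea: when a search returns fewer than
# expansion_threshold hits, tags from the partial matches seed a second,
# tag-based lookup (illustrative logic only, not the real implementation).
def recall_with_expansion(search, by_tags, query, expansion_threshold=3):
    hits = search(query)
    if expansion_threshold and len(hits) < expansion_threshold:
        seed_tags = {t for m in hits for t in m["tags"]}
        extra = [m for m in by_tags(seed_tags) if m not in hits]
        hits = hits + extra
    return hits

memories = [
    {"summary": "dark mode pref", "tags": ["ui"]},
    {"summary": "ui layout note", "tags": ["ui"]},
]
search = lambda q: [m for m in memories if q in m["summary"]]
by_tags = lambda tags: [m for m in memories if set(m["tags"]) & set(tags)]

# One direct FTS hit, expanded to a second memory via the shared "ui" tag
out = recall_with_expansion(search, by_tags, "dark mode")
```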
## Implementation Notes

- Backend: Turso SQLite HTTP API (all queries go directly to Turso)
- Credential auto-detection (v3.8.0): Scans env vars, then `/mnt/project/turso.env`, `/mnt/project/muninn.env`, `~/.muninn/.env`
- FTS5 search: Server-side FTS5 with Porter stemmer tokenizer, BM25 × recency × priority composite scoring
- Retry with exponential backoff for transient errors (503, 429, SSL)
- Thread-safe for background writes
- Repo defaults fallback: `scripts/defaults/` used when Turso is unavailable
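The backoff behavior can be sketched like this; the attempt count and delay schedule are illustrative assumptions:

```python
import time

# Sketch of retry with exponential backoff for transient HTTP errors.
# Status codes are from the notes above; delays and attempt count are
# illustrative assumptions.
TRANSIENT = {429, 503}

def with_retry(call, attempts=4, base_delay=0.25, sleep=time.sleep):
    for attempt in range(attempts):
        status, body = call()
        if status not in TRANSIENT:
            return status, body
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 0.25s, 0.5s, 1s, ...
    return status, body

# Fails twice with 503, then succeeds; sleep stubbed out for the demo
responses = iter([(503, None), (503, None), (200, "ok")])
status, body = with_retry(lambda: next(responses), sleep=lambda s: None)
```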
## Session Continuity (v4.3.0)
Save and resume session state for cross-session persistence:
```python
from scripts import session_save, session_resume, sessions

# Save a checkpoint before ending session
session_save("Implementing FTS5 search", context={"files": ["cache.py"], "status": "in-progress"})

# In a new session: resume from last checkpoint
checkpoint = session_resume("previous-session-id")
print(checkpoint['summary'])               # What was happening
print(checkpoint['context'])               # Custom context data
print(len(checkpoint['recent_memories']))  # Recent memories from that session

# List available session checkpoints
for s in sessions():
    print(f"{s['session_id']}: {s['summary'][:60]}")
```
## Memory Consolidation (v4.2.0)
Automatically cluster related memories and synthesize summaries, reducing retrieval noise while preserving traceability:
```python
from scripts import consolidate

# Preview what would be consolidated
result = consolidate(dry_run=True)
for c in result['clusters']:
    print(f"Tag '{c['tag']}': {c['count']} memories")

# Actually consolidate (creates summaries, demotes originals to background)
result = consolidate(dry_run=False, min_cluster=3)
print(f"Consolidated {result['consolidated']} clusters, demoted {result['demoted']} memories")

# Scope to specific tags
result = consolidate(tags=["debugging"], dry_run=False)
```

How it works:

1. **Clustering**: Groups memories by shared tags (minimum `min_cluster` memories per group)
2. **Synthesis**: Creates a `type=world` summary memory tagged `consolidated` containing all originals
3. **Archival**: Demotes original memories to `priority=-1` (background)
4. **Traceability**: Summary's `refs` field lists all original memory IDs
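Step 1 (clustering by shared tags) can be sketched as a simple group-by; `cluster_by_tag` is an illustrative reduction, not the actual implementation:

```python
from collections import defaultdict

# Sketch of the tag-based clustering step: group memories by shared tag and
# keep only groups of at least min_cluster (illustrative, not the real code).
def cluster_by_tag(memories, min_cluster=3):
    groups = defaultdict(list)
    for m in memories:
        for tag in m["tags"]:
            groups[tag].append(m)
    return {t: ms for t, ms in groups.items() if len(ms) >= min_cluster}

mems = [
    {"summary": "s1", "tags": ["debugging"]},
    {"summary": "s2", "tags": ["debugging"]},
    {"summary": "s3", "tags": ["debugging", "ui"]},
    {"summary": "s4", "tags": ["ui"]},
]
clusters = cluster_by_tag(mems)  # "ui" has only 2 members, so it is dropped
```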
## Cross-Episodic Reflection (v4.4.0)
Phase 1.5 of the therapy workflow: systematically convert clusters of similar experiences into generalized semantic knowledge.
```python
from scripts import therapy_reflect

# Preview discovered patterns without creating memories
result = therapy_reflect(dry_run=True)
for c in result['clusters']:
    print(f"Pattern ({len(c['source_ids'])} episodes): {c['pattern'][:80]}")
    print(f"  Common tags: {c['tags']}")

# Create semantic memories from patterns
result = therapy_reflect(dry_run=False)
print(f"Created {result['created']} pattern memories from {len(result['clusters'])} clusters")
```

How it works:

1. **Sampling**: Retrieves recent episodic memories (`type=experience`)
2. **Similarity search**: For each experience, finds similar past episodes via `recall()`
3. **Clustering**: Groups 3+ similar experiences into pattern clusters
4. **Extraction**: Creates `type=world` semantic memories tagged `reflection` + `cross-episodic`
5. **Traceability**: Each pattern memory's `refs` field lists all source episode IDs

Integrates into the existing therapy workflow between the pruning and synthesis phases.
## Advanced Topics
For architecture details, see _ARCH.md.
See references/advanced-operations.md for:

- Date-filtered queries (`recall_since`, `recall_between`, `since`/`until` parameters)
- Priority system and memory consolidation (`strengthen`, `weaken`)
- Therapy helpers, cross-episodic reflection, and analysis helpers
- Handoff convention (cross-environment coordination)
- Session scoping and continuity (`session_save`, `session_resume`, `sessions`)
- Retrieval observability and retention management
- Export/import for portability
- Type-safe results (`MemoryResult`) details
- Proactive memory hints (`recall_hints`)
- GitHub access detection and unified API
- Progressive disclosure and priority-based ordering
- Decision alternatives (`get_alternatives`) and memory consolidation (`consolidate`)
- Reference chain traversal (`get_chain`)
- Batch APIs (`recall_batch`, `remember_batch`) for reducing HTTP round-trips