Session Learning Skill


Purpose


This skill provides cross-session learning by:
  1. Extracting learnings from session transcripts at the Stop hook
  2. Storing learnings in structured YAML format (~/.amplihack/.claude/data/learnings/)
  3. Injecting relevant past learnings at SessionStart based on task similarity
  4. Managing learnings via the /amplihack:learnings command

Design Philosophy


Ruthlessly Simple Approach:
  • One YAML file per learning category (not per session)
  • Simple keyword matching for relevance (no complex ML)
  • Complements existing DISCOVERIES.md/PATTERNS.md - doesn't replace them
  • Fail-safe: Never blocks session start or stop

Learning Categories


Learnings are stored in five categories:

| Category | File | Purpose |
|----------|------|---------|
| errors | errors.yaml | Error patterns and their solutions |
| workflows | workflows.yaml | Workflow insights and shortcuts |
| tools | tools.yaml | Tool usage patterns and gotchas |
| architecture | architecture.yaml | Design decisions and trade-offs |
| debugging | debugging.yaml | Debugging strategies and root causes |

YAML Schema


Each learning file follows this structure:

.claude/data/learnings/errors.yaml


yaml
category: errors
last_updated: "2025-11-25T12:00:00Z"
learnings:
  - id: "err-001"
    created: "2025-11-25T12:00:00Z"
    keywords:
      - "import"
      - "module not found"
      - "circular dependency"
    summary: "Circular imports cause 'module not found' errors"
    insight: |
      When module A imports from module B and module B imports from
      module A, Python raises ImportError. Solution: Move shared code
      to a third module or use lazy imports.
    example: |
      # Bad: circular import
      # utils.py imports from models.py
      # models.py imports from utils.py

      # Good: extract shared code
      # shared.py has common functions
      # both utils.py and models.py import from shared.py
    confidence: 0.9
    times_used: 3

When to Use This Skill


Automatic Usage (via hooks):
  • At session stop: Extracts learnings from transcript
  • At session start: Injects relevant learnings based on prompt keywords
Manual Usage:
  • When you want to view/manage learnings
  • When debugging and want to recall past solutions
  • When onboarding to understand project-specific patterns

Learning Extraction Process


Step 1: Analyze Session Transcript


At session stop, scan for:
  1. Error patterns: Errors encountered and how they were solved
  2. Workflow insights: Steps that worked well or poorly
  3. Tool discoveries: New ways of using tools effectively
  4. Architecture decisions: Design choices and their rationale
  5. Debugging strategies: Root cause analysis patterns

Step 2: Extract Structured Learning


For each significant insight:
  1. Generate unique ID based on category and timestamp
  2. Extract keywords from context (3-5 relevant terms)
  3. Create one-sentence summary
  4. Write detailed insight with explanation
  5. Include code example if applicable
  6. Assign confidence score (0.5-1.0)
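The extraction steps above can be sketched as a small record builder. A minimal sketch: `make_learning` is a hypothetical helper name, and the timestamp-based ID format is an assumption (the schema example uses sequential IDs like "err-001").

```python
from datetime import datetime, timezone

def make_learning(category: str, keywords: list, summary: str, insight: str,
                  example: str = "", confidence: float = 0.7) -> dict:
    """Build one learning record matching the YAML schema.

    Hypothetical helper; the timestamp-based ID is an assumption."""
    now = datetime.now(timezone.utc)
    return {
        # Unique ID from category prefix + timestamp
        "id": f"{category[:3]}-{now.strftime('%Y%m%d%H%M%S')}",
        "created": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "keywords": keywords[:5],                      # 3-5 relevant terms
        "summary": summary,                            # one sentence
        "insight": insight,                            # detailed explanation
        "example": example,                            # code if applicable
        "confidence": max(0.5, min(1.0, confidence)),  # clamp to 0.5-1.0
        "times_used": 0,
    }

learning = make_learning(
    "errors", ["import", "circular dependency"],
    "Circular imports cause ImportError",
    "Move shared code to a third module or use lazy imports.",
)
```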

Step 3: Merge with Existing Learnings


  1. Check for duplicate learnings using keyword overlap
  2. If similar learning exists (>60% keyword match), update confidence
  3. Otherwise, append new learning to category file
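The merge rule can be sketched as follows; `merge_learning` and the +0.05 confidence bump are illustrative assumptions (the skill only specifies the >60% keyword-match threshold).

```python
def merge_learning(existing: list, new: dict, threshold: float = 0.6) -> list:
    """Append a new learning, or reinforce a near-duplicate in place."""
    new_kw = {k.lower() for k in new.get("keywords", [])}
    for old in existing:
        old_kw = {k.lower() for k in old.get("keywords", [])}
        if not new_kw or not old_kw:
            continue
        # Keyword overlap relative to the smaller keyword set
        overlap = len(new_kw & old_kw) / min(len(new_kw), len(old_kw))
        if overlap > threshold:
            # Similar learning exists (>60% match): update confidence
            old["confidence"] = min(1.0, old.get("confidence", 0.5) + 0.05)
            return existing
    existing.append(new)  # otherwise, append as a new learning
    return existing

learnings = [{"keywords": ["import", "circular", "module"], "confidence": 0.8}]
merge_learning(learnings, {"keywords": ["import", "circular"], "confidence": 0.6})
```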

Learning Injection Process


Step 1: Extract Task Keywords


From session start prompt, extract:
  • Technical terms (languages, frameworks, tools)
  • Problem indicators (error, fix, debug, implement)
  • Domain keywords (api, database, auth, etc.)
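A minimal sketch of this keyword extraction; the stopword list and length cutoff are assumptions, mirroring the heuristic used in the injection hook later in this document.

```python
# Illustrative stopword list; a real implementation could be richer.
STOPWORDS = {"the", "this", "that", "with", "from", "please", "into"}

def extract_task_keywords(prompt: str) -> set:
    """Pull searchable keywords from a session-start prompt."""
    words = {w.strip(".,!?\"'").lower() for w in prompt.split()}
    # Keep terms long enough to be meaningful, drop filler words
    return {w for w in words if len(w) > 3 and w not in STOPWORDS}

keywords = extract_task_keywords("Fix the import error in the memory module")
```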

Step 2: Find Relevant Learnings


For each learning category:
  1. Load learnings from YAML
  2. Calculate keyword overlap with the task
  3. Rank by overlap_score * confidence * recency_weight
  4. Select the top 3 most relevant learnings
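The ranking can be sketched as below. The exponential recency weight and its 90-day half-life are assumptions; the skill only specifies the overlap_score * confidence * recency_weight product and the top-3 cutoff.

```python
from datetime import datetime, timezone

def recency_weight(created_iso: str, half_life_days: float = 90.0) -> float:
    """Decay weight: a learning half_life_days old scores 0.5 (assumed curve)."""
    created = datetime.fromisoformat(created_iso.replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).total_seconds() / 86400
    return 0.5 ** (max(age_days, 0.0) / half_life_days)

def rank_learnings(task_keywords: set, learnings: list, top_n: int = 3) -> list:
    """Rank by overlap_score * confidence * recency_weight, keep top_n."""
    scored = []
    for learning in learnings:
        kw = {k.lower() for k in learning.get("keywords", [])}
        if not kw or not task_keywords:
            continue
        overlap = len(task_keywords & kw) / min(len(task_keywords), len(kw))
        if overlap == 0:
            continue  # no keyword overlap with the task
        score = (overlap
                 * learning.get("confidence", 0.5)
                 * recency_weight(learning.get("created", "2025-01-01T00:00:00Z")))
        scored.append((score, learning))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [learning for _, learning in scored[:top_n]]
```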

Step 3: Inject Context


Format relevant learnings as context:

markdown
## Past Learnings Relevant to This Task

### [Category]: [Summary]

[Insight with example if helpful]

Usage Examples


Example 1: Automatic Extraction


Session: Debugging circular import issue in Neo4j module
Duration: 45 minutes
Resolution: Moved shared types to separate file

Extracted Learning:
- Category: errors
- Keywords: [import, circular, neo4j, type]
- Summary: Circular imports in Neo4j types cause ImportError
- Insight: When Neo4jNode imports from connection.py which imports
  Node types, move types to separate types.py module
- Example: types.py with dataclasses, connection.py imports from types.py

Example 2: Automatic Injection


Session Start Prompt: "Fix the import error in the memory module"

Matched Learnings:
1. errors/err-001: "Circular imports cause 'module not found' errors" (85% match)
2. debugging/dbg-003: "Use `python -c` to isolate import issues" (60% match)

Injected Context:

## Past Learnings Relevant to This Task

### Errors: Circular imports cause 'module not found' errors

When module A imports from module B and B imports from A, Python raises ImportError. Solution: Move shared code to a third module or use lazy imports.

Example 3: Manual Management


User: Show me what I've learned about testing

Claude (using this skill):
1. Reads .claude/data/learnings/workflows.yaml
2. Filters learnings with keywords containing "test"
3. Displays formatted list with summaries and examples
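The filtering in step 2 can be sketched over an in-memory list; `filter_learnings` is a hypothetical helper, not part of the skill's required API.

```python
def filter_learnings(learnings: list, query: str) -> list:
    """Keep learnings whose keywords or summary mention the query."""
    q = query.lower()
    return [
        learning for learning in learnings
        if q in " ".join(learning.get("keywords", [])).lower()
        or q in learning.get("summary", "").lower()
    ]

workflow_learnings = [
    {"keywords": ["pytest", "fixture"], "summary": "Use fixtures for shared setup"},
    {"keywords": ["docker"], "summary": "Pin base image versions"},
]
testing_hits = filter_learnings(workflow_learnings, "test")
```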

Keyword Matching Algorithm


Simple but effective matching:
python
def calculate_relevance(task_keywords: set, learning_keywords: set) -> float:
    """Calculate relevance score between 0 and 1."""
    if not task_keywords or not learning_keywords:
        return 0.0

    # Count overlapping keywords
    overlap = task_keywords & learning_keywords

    # Score: overlap / min(task, learning) to not penalize short queries
    return len(overlap) / min(len(task_keywords), len(learning_keywords))

Integration Points


With Stop Hook


The stop hook can call this skill to extract learnings:
  1. Parse transcript for significant events
  2. Identify error patterns, solutions, insights
  3. Store in appropriate category YAML
  4. Log extraction summary

With Session Start Hook


The session start hook can inject relevant learnings:
  1. Parse initial prompt for keywords
  2. Find matching learnings across categories
  3. Format as context injection
  4. Include in session context

With /amplihack:learnings Command


Command interface for learning management:
  • /amplihack:learnings show [category] - Display learnings
  • /amplihack:learnings search <query> - Search across all categories
  • /amplihack:learnings add - Manually add a learning
  • /amplihack:learnings stats - Show learning statistics

Quality Guidelines


When to Extract


Extract a learning when:
  • Solving a problem that took >10 minutes
  • Discovering non-obvious tool behavior
  • Finding a pattern that applies broadly
  • Making an architecture decision with trade-offs

When NOT to Extract


Skip extraction when:
  • Issue was trivial typo or syntax error
  • Solution is already in DISCOVERIES.md or PATTERNS.md
  • Insight is too project-specific to reuse
  • Confidence is low (<0.5)

Learning Quality Checklist


  • Keywords are specific and searchable
  • Summary is one clear sentence
  • Insight explains WHY, not just WHAT
  • Example is minimal and runnable
  • Confidence reflects actual certainty

File Locations


.claude/
  data/
    learnings/
      errors.yaml        # Error patterns and solutions
      workflows.yaml     # Workflow insights
      tools.yaml         # Tool usage patterns
      architecture.yaml  # Design decisions
      debugging.yaml     # Debugging strategies
      _stats.yaml        # Usage statistics (auto-generated)

Comparison with Existing Systems


| Feature | DISCOVERIES.md | PATTERNS.md | Session Learning |
|---------|----------------|-------------|------------------|
| Format | Markdown | Markdown | YAML |
| Audience | Humans | Humans | Agents + Humans |
| Storage | Single file | Single file | Per-category files |
| Matching | Manual read | Manual read | Keyword-based auto |
| Injection | Manual | Manual | Automatic |
| Scope | Major discoveries | Proven patterns | Any useful insight |
Complementary Use:
  • Use DISCOVERIES.md for major, well-documented discoveries
  • Use PATTERNS.md for proven, reusable patterns with code
  • Use Session Learning for quick insights that help future sessions

Error Handling


YAML Parsing Errors


If a learning file becomes corrupted or invalid:
python
import yaml
from pathlib import Path

def safe_load_learnings(filepath: Path) -> dict:
    """Load learnings with graceful error handling."""
    try:
        content = filepath.read_text()
        data = yaml.safe_load(content)
        if not isinstance(data, dict) or "learnings" not in data:
            print(f"Warning: Invalid structure in {filepath}, using empty learnings")
            return {"category": filepath.stem, "learnings": []}
        return data
    except yaml.YAMLError as e:
        print(f"Warning: YAML error in {filepath}: {e}")
        # Create backup before recovery
        backup = filepath.with_suffix(".yaml.bak")
        filepath.rename(backup)
        print(f"Backed up corrupted file to {backup}")
        return {"category": filepath.stem, "learnings": []}
    except Exception as e:
        print(f"Warning: Could not read {filepath}: {e}")
        return {"category": filepath.stem, "learnings": []}

Missing Files


If the learnings directory doesn't exist, create it:
python
def ensure_learnings_directory():
    """Create learnings directory and empty files if missing."""
    learnings_dir = Path(".claude/data/learnings")
    learnings_dir.mkdir(parents=True, exist_ok=True)

    categories = ["errors", "workflows", "tools", "architecture", "debugging"]
    for cat in categories:
        filepath = learnings_dir / f"{cat}.yaml"
        if not filepath.exists():
            filepath.write_text(f"category: {cat}\nlearnings: []\n")

Fail-Safe Principle

故障安全原则

The learning system follows fail-safe design:
  • Never blocks session start: If injection fails, session continues normally
  • Never blocks session stop: If extraction fails, session ends normally
  • Logs warnings but continues: Errors are logged, not raised
  • Creates backups before modifications: Corrupt files are preserved
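The fail-safe rules above can be wrapped once and reused across hooks; the `fail_safe` decorator name and its exact behavior are illustrative, not part of the skill.

```python
import functools

def fail_safe(default=None):
    """Run the wrapped function, but log and return `default` on any error."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as e:
                # Log a warning but never raise: session start/stop must not block
                print(f"Warning: {fn.__name__} failed (non-blocking): {e}")
                return default
        return wrapper
    return decorator

@fail_safe(default="")
def inject_learnings(prompt: str) -> str:
    raise RuntimeError("simulated corrupt YAML")  # hypothetical failure

result = inject_learnings("fix import error")  # session continues with ""
```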

Hook Integration


Stop Hook: Learning Extraction


Add learning extraction to your stop hook:
python
# .claude/tools/amplihack/hooks/stop_hook.py
async def extract_session_learnings(transcript: str, session_id: str):
    """Extract learnings from session transcript at stop."""
    from pathlib import Path
    import yaml
    from datetime import datetime

    # Only extract if session was substantive (not just a quick question)
    if len(transcript) < 1000:
        return

    # Use Claude to extract insights (simplified example);
    # transcript is truncated to stay within token limits
    extraction_prompt = f"""
Analyze this session transcript and extract any reusable learnings.

Categories:
- errors: Error patterns and solutions
- workflows: Process improvements
- tools: Tool usage insights
- architecture: Design decisions
- debugging: Debug strategies

For each learning, provide:
- category (one of the above)
- keywords (3-5 searchable terms)
- summary (one sentence)
- insight (detailed explanation)
- example (code if applicable)
- confidence (0.5-1.0)

Transcript:
{transcript[:5000]}
"""

    # ... call Claude to extract ...
    # ... parse response and add to appropriate YAML files ...


def on_stop(session_data: dict):
    """Stop hook entry point."""
    # ... other stop hook logic ...

    # Extract learnings (non-blocking)
    try:
        import asyncio
        asyncio.create_task(
            extract_session_learnings(
                session_data.get("transcript", ""),
                session_data.get("session_id", ""),
            )
        )
    except Exception as e:
        print(f"Learning extraction failed (non-blocking): {e}")

Session Start Hook: Learning Injection


Add learning injection to your session start hook:
python
# .claude/tools/amplihack/hooks/session_start_hook.py
def inject_relevant_learnings(initial_prompt: str) -> str:
    """Find and format relevant learnings for injection."""
    from pathlib import Path
    import yaml

    learnings_dir = Path(".claude/data/learnings")
    if not learnings_dir.exists():
        return ""

    # Extract keywords from prompt
    prompt_lower = initial_prompt.lower()
    task_keywords = set()
    for word in prompt_lower.split():
        if len(word) > 3:  # Skip short words
            task_keywords.add(word.strip(".,!?"))

    # Find matching learnings
    matches = []
    for yaml_file in learnings_dir.glob("*.yaml"):
        if yaml_file.name.startswith("_"):
            continue  # Skip _stats.yaml

        try:
            data = yaml.safe_load(yaml_file.read_text())
            for learning in data.get("learnings", []):
                learning_keywords = set(k.lower() for k in learning.get("keywords", []))
                overlap = task_keywords & learning_keywords
                if overlap:
                    score = len(overlap) * learning.get("confidence", 0.5)
                    matches.append((score, learning))
        except Exception:
            continue

    # Return top 3 matches
    matches.sort(key=lambda x: x[0], reverse=True)
    if not matches:
        return ""

    context = "## Past Learnings Relevant to This Task\n\n"
    for score, learning in matches[:3]:
        context += f"### {learning.get('summary', 'Insight')}\n"
        context += f"{learning.get('insight', '')}\n\n"

    return context


def on_session_start(session_data: dict) -> dict:
    """Session start hook entry point."""
    initial_prompt = session_data.get("prompt", "")

    # Inject relevant learnings
    try:
        learning_context = inject_relevant_learnings(initial_prompt)
        if learning_context:
            session_data["injected_context"] = learning_context
    except Exception as e:
        print(f"Learning injection failed (non-blocking): {e}")

    return session_data

Limitations


  1. Keyword matching is imperfect - May miss relevant learnings or match irrelevant ones
  2. No semantic understanding - Can't match conceptually similar but differently-worded insights
  3. Storage is local - Learnings don't sync across machines
  4. Manual cleanup needed - Old/wrong learnings should be periodically reviewed

Future Improvements


If needed, consider:
  • Embedding-based similarity for better matching
  • Cross-machine sync via git
  • Automatic confidence decay over time
  • Integration with Neo4j for graph-based learning relationships
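Automatic confidence decay could be as simple as the sketch below; the linear rate and 0.5 floor are illustrative assumptions, since the skill only suggests decay as a possible extension.

```python
def decayed_confidence(confidence: float, age_days: float,
                       rate_per_day: float = 0.002, floor: float = 0.5) -> float:
    """Linearly lower confidence with age, never dropping below the floor."""
    return max(floor, confidence - rate_per_day * age_days)
```

Older, unreinforced learnings would then rank lower in injection scoring, while learnings refreshed by reuse keep their weight.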

Success Metrics


Track effectiveness:
  • Injection rate: % of sessions with relevant learning injected
  • Usage rate: How often injected learnings help solve problems
  • Growth rate: New learnings per week
  • Quality: User feedback on learning relevance