ln-650-persistence-performance-auditor


Persistence & Performance Auditor (L2 Coordinator)

Coordinates 3 specialized audit workers to perform database efficiency, transaction correctness, and runtime performance analysis.

Purpose & Scope

  • Coordinates 3 audit workers (ln-651, ln-652, ln-653) running in parallel
  • Research current best practices for the detected DB, ORM, and async framework via MCP tools ONCE
  • Pass shared context to all workers (token-efficient)
  • Aggregate worker results into single consolidated report
  • Create single task in Linear under Epic 0 with all findings
  • Manual invocation by user; not part of Story pipeline
  • Independent from ln-620 (can be run separately or after ln-620)

Workflow

  1. Discovery: Load tech_stack.md, package manifests, detect DB/ORM/async framework, auto-discover Team ID
  2. Research: Query MCP tools for DB/ORM/async best practices ONCE
  3. Build Context: Create contextStore with best practices + DB-specific metadata
  4. Delegate: 3 workers in PARALLEL
  5. Aggregate: Collect worker results, calculate scores
  6. Generate Report: Build consolidated report
  7. Create Task: Create Linear task in Epic 0 titled "Persistence & Performance Audit: [YYYY-MM-DD]"

Phase 1: Discovery

Load project metadata:
  • docs/project/tech_stack.md
    - detect DB, ORM, async framework
  • Package manifests: requirements.txt, pyproject.toml, package.json, go.mod
  • Auto-discover Team ID from docs/tasks/kanban_board.md

Extract DB-specific metadata:

| Metadata | Source | Example |
| --- | --- | --- |
| Database type | tech_stack.md, docker-compose.yml | PostgreSQL 16 |
| ORM | imports, requirements.txt | SQLAlchemy 2.0 |
| Async framework | imports, requirements.txt | asyncio, FastAPI |
| Session config | grep create_async_engine, sessionmaker | expire_on_commit=False |
| Triggers/NOTIFY | migration files | pg_notify('job_events', ...) |
| Connection pooling | engine config | pool_size=10, max_overflow=20 |

Scan for triggers:

Grep("pg_notify|NOTIFY|CREATE TRIGGER", path="alembic/versions/")
  OR path="migrations/"
→ Store: db_config.triggers = [{table, event, function}]
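The scan-and-store step can be sketched in Python. The regex below is an illustrative assumption about the migration SQL, not the actual behavior of the Grep tool:

```python
import re

# Illustrative sketch: extract {table, event, function} trigger metadata from
# migration SQL, matching the db_config.triggers shape described above.
TRIGGER_RE = re.compile(
    r"CREATE\s+TRIGGER\s+\w+\s+(?:BEFORE|AFTER)\s+(?P<event>\w+)\s+ON\s+(?P<table>\w+)"
    r".*?EXECUTE\s+(?:FUNCTION|PROCEDURE)\s+(?P<function>\w+)",
    re.IGNORECASE | re.DOTALL,
)

def scan_triggers(migration_sql: str) -> list[dict]:
    """Return one {event, table, function} dict per trigger found."""
    return [m.groupdict() for m in TRIGGER_RE.finditer(migration_sql)]

sql = """
CREATE TRIGGER jobs_notify AFTER UPDATE ON jobs
FOR EACH ROW EXECUTE FUNCTION notify_job_events();
"""
print(scan_triggers(sql))
# → [{'event': 'UPDATE', 'table': 'jobs', 'function': 'notify_job_events'}]
```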

Phase 2: Research Best Practices (ONCE)

For each detected technology:

| Technology | Research Focus |
| --- | --- |
| SQLAlchemy | Session lifecycle, expire_on_commit, bulk operations, eager/lazy loading |
| PostgreSQL | NOTIFY/LISTEN semantics, transaction isolation, batch operations |
| asyncio | to_thread, blocking detection, event loop best practices |
| FastAPI | Dependency injection scopes, background tasks, async endpoints |

Build contextStore:

```json
{
  "tech_stack": {"db": "postgresql", "orm": "sqlalchemy", "async": "asyncio"},
  "best_practices": {"sqlalchemy": {...}, "postgresql": {...}, "asyncio": {...}},
  "db_config": {
    "expire_on_commit": false,
    "triggers": [{"table": "jobs", "event": "UPDATE", "function": "notify_job_events"}],
    "pool_size": 10
  },
  "codebase_root": "/project"
}
```

Phase 3: Delegate to Workers

CRITICAL: All delegations use the Task tool with subagent_type: "general-purpose" for context isolation.

Prompt template:

```
Task(description: "Audit via ln-65X",
     prompt: "Execute ln-65X-{worker}-auditor. Read skill from ln-65X-{worker}-auditor/SKILL.md. Context: {contextStore}",
     subagent_type: "general-purpose")
```

Anti-Patterns:
  • ❌ Direct Skill tool invocation without Task wrapper
  • ❌ Any execution bypassing subagent context isolation

Workers (ALL 3 in PARALLEL):

| # | Worker | Priority | What It Audits |
| --- | --- | --- | --- |
| 1 | ln-651-query-efficiency-auditor | HIGH | Redundant queries, N-UPDATE loops, over-fetching, caching scope |
| 2 | ln-652-transaction-correctness-auditor | HIGH | Commit patterns, trigger interaction, transaction scope, rollback |
| 3 | ln-653-runtime-performance-auditor | MEDIUM | Blocking IO in async, allocations, sync sleep, string concat |

Invocation (3 workers in PARALLEL):

```javascript
FOR EACH worker IN [ln-651, ln-652, ln-653]:
  Task(description: "Audit via " + worker,
       prompt: "Execute " + worker + ". Read skill. Context: " + JSON.stringify(contextStore),
       subagent_type: "general-purpose")
```

Phase 4: Aggregate Results

Collect results from workers:

```json
{
  "category": "Query Efficiency",
  "score": 6,
  "total_issues": 8,
  "findings": [...]
}
```
Aggregation steps:
  1. Merge findings from all 3 workers
  2. Calculate overall score: average of 3 category scores
  3. Sum severity counts across all workers
  4. Sort findings by severity (CRITICAL → HIGH → MEDIUM → LOW)
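The four steps above can be sketched as follows; the worker-result shape matches the JSON above, and any field names beyond it (such as a per-finding "severity" key) are assumptions:

```python
# Sketch of the aggregation steps, assuming each finding carries a
# "severity" field and each worker result carries "score" and "findings".
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def aggregate(worker_results: list[dict]) -> dict:
    # 1. Merge findings from all workers
    findings = [f for r in worker_results for f in r["findings"]]
    # 2. Overall score = average of the category scores
    overall = round(sum(r["score"] for r in worker_results) / len(worker_results), 1)
    # 3. Sum severity counts across all workers
    counts = {s: 0 for s in SEVERITY_ORDER}
    for f in findings:
        counts[f["severity"]] += 1
    # 4. Sort findings CRITICAL → HIGH → MEDIUM → LOW
    findings.sort(key=lambda f: SEVERITY_ORDER[f["severity"]])
    return {"overall_score": overall, "severity_counts": counts, "findings": findings}

workers = [
    {"category": "Query Efficiency", "score": 6, "findings": [{"severity": "HIGH"}]},
    {"category": "Transaction Correctness", "score": 4, "findings": [{"severity": "CRITICAL"}]},
    {"category": "Runtime Performance", "score": 8, "findings": [{"severity": "MEDIUM"}]},
]
print(aggregate(workers)["overall_score"])  # → 6.0
```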

Output Format


Persistence & Performance Audit Report - [DATE]

Executive Summary

[2-3 sentences on overall persistence/performance health]

Compliance Score

| Category | Score | Notes |
| --- | --- | --- |
| Query Efficiency | X/10 | ... |
| Transaction Correctness | X/10 | ... |
| Runtime Performance | X/10 | ... |
| Overall | X/10 | |

Severity Summary

| Severity | Count |
| --- | --- |
| Critical | X |
| High | X |
| Medium | X |
| Low | X |

Findings by Category

1. Query Efficiency

| Severity | Location | Issue | Recommendation | Effort |
| --- | --- | --- | --- | --- |
| HIGH | job_processor.py:434 | Redundant entity fetch | Pass object not ID | S |
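For illustration, a minimal sketch of the pattern behind this finding; FakeSession and the function names are hypothetical stand-ins, not code from the audited project:

```python
# Anti-pattern: passing an ID forces the callee to re-fetch a row the caller
# already loaded. The fix is to pass the loaded object itself.
class FakeSession:
    """Stand-in session that counts row fetches."""
    def __init__(self, rows):
        self.rows, self.query_count = rows, 0

    def get(self, job_id):
        self.query_count += 1
        return self.rows[job_id]

def status_by_id(session, job_id):      # redundant: second fetch of the same row
    return session.get(job_id)["status"]

def status_by_object(job):              # efficient: no extra query
    return job["status"]

session = FakeSession({1: {"status": "done"}})
job = session.get(1)                    # first (necessary) fetch
status_by_id(session, 1)                # redundant re-fetch
status_by_object(job)                   # reuses the already-loaded object
print(session.query_count)  # → 2 (would stay at 1 if only the object were passed)
```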

2. Transaction Correctness

| Severity | Location | Issue | Recommendation | Effort |
| --- | --- | --- | --- | --- |
| CRITICAL | job_processor.py:412 | Missing intermediate commits | Add commit at milestones | S |
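A hedged sketch of what "add commit at milestones" could look like; the session and job-step names are illustrative, not the project's actual API:

```python
# A long job that commits only at the end holds a single transaction open for
# the whole run; committing at milestones releases locks and persists progress.
class FakeSession:
    """Stand-in session that counts commits."""
    def __init__(self):
        self.commits = 0

    def commit(self):
        self.commits += 1

def run_job(session, steps, milestone_every=2):
    for i, step in enumerate(steps, 1):
        step()
        if i % milestone_every == 0:   # intermediate commit at each milestone
            session.commit()
    session.commit()                   # final commit

s = FakeSession()
run_job(s, [lambda: None] * 4)
print(s.commits)  # → 3 (two milestone commits plus the final one)
```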

3. Runtime Performance

| Severity | Location | Issue | Recommendation | Effort |
| --- | --- | --- | --- | --- |
| HIGH | job_processor.py:444 | Blocking read_bytes() in async | Use aiofiles/to_thread | S |
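A stdlib-only sketch of the recommended fix: Path.read_bytes() blocks the event loop inside a coroutine, so offload it with asyncio.to_thread (aiofiles is the async-native alternative named in the recommendation):

```python
import asyncio
import tempfile
from pathlib import Path

async def read_payload(path: Path) -> bytes:
    # Blocking form (the audited anti-pattern): data = path.read_bytes()
    # Non-blocking fix: run the blocking call in a worker thread.
    return await asyncio.to_thread(path.read_bytes)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"payload")
data = asyncio.run(read_payload(Path(f.name)))
print(data)  # → b'payload'
```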

Recommended Actions (Priority-Sorted)

| Priority | Category | Location | Issue | Recommendation | Effort |
| --- | --- | --- | --- | --- | --- |
| CRITICAL | Transaction | ... | Missing commits | Add strategic commits | S |
| HIGH | Query | ... | Redundant fetch | Pass object not ID | S |

Sources Consulted

  • SQLAlchemy best practices: [URL]
  • PostgreSQL NOTIFY docs: [URL]
  • Python asyncio-dev: [URL]

Phase 5: Create Linear Task

Create task in Epic 0:
  • Title: Persistence & Performance Audit: [YYYY-MM-DD]
  • Description: Full report from Phase 4 (markdown format)
  • Team: Auto-discovered from kanban_board.md
  • Epic: 0 (technical debt / refactoring epic)
  • Labels: refactoring, performance, audit
  • Priority: Based on highest severity findings

Critical Rules

  • Single context gathering: Research best practices ONCE, pass contextStore to all workers
  • Parallel execution: All 3 workers run in PARALLEL
  • Trigger discovery: Scan migrations for triggers/NOTIFY before delegating (pass to ln-652)
  • Metadata-only loading: Coordinator loads metadata; workers load full file contents
  • Single task: Create ONE task with all findings; do not create multiple tasks
  • Do not audit: Coordinator orchestrates only; audit logic lives in workers

Definition of Done

  • Tech stack discovered (DB type, ORM, async framework)
  • DB-specific metadata extracted (triggers, session config, pool settings)
  • Best practices researched via MCP tools
  • contextStore built and passed to workers
  • All 3 workers invoked in PARALLEL and completed
  • Results aggregated with severity-sorted findings
  • Compliance score calculated per category + overall
  • Executive Summary included
  • Linear task created in Epic 0 with full report
  • Sources consulted listed with URLs

Workers

  • ln-651-query-efficiency-auditor
  • ln-652-transaction-correctness-auditor
  • ln-653-runtime-performance-auditor

Reference Files

  • Tech stack: docs/project/tech_stack.md
  • Kanban board: docs/tasks/kanban_board.md

Version: 1.0.0 Last Updated: 2026-02-04