ln-230-story-prioritizer
Story Prioritizer
Evaluate Stories using RICE scoring with market research. Generate consolidated prioritization table for Epic.
Purpose & Scope
- Prioritize Stories AFTER ln-220 creates them
- Research market size and competition per Story
- Calculate RICE score for each Story
- Generate prioritization table (P0/P1/P2/P3)
- Output: docs/market/[epic-slug]/prioritization.md
When to Use
Use this skill when:
- Stories created by ln-220, need business prioritization
- Planning sprint with limited capacity (which Stories first?)
- Stakeholder review requires data-driven priorities
- Evaluating feature ROI before implementation
Do NOT use when:
- Epic has no Stories yet (run ln-220 first)
- Stories are purely technical (infrastructure, refactoring)
- Prioritization already exists in docs/market/
Who calls this skill:
- User (manual) - after ln-220-story-coordinator
- Future: ln-200-scope-decomposer (optional Phase)
Input Parameters
| Parameter | Required | Description | Default |
|---|---|---|---|
| epic | Yes | Epic ID or "Epic N" format | - |
| stories | No | Specific Story IDs to prioritize | All in Epic |
| depth | No | Research depth (quick/standard/deep) | "standard" |
depth options:
- quick - 2-3 min/Story, 1 WebSearch per type
- standard - 5-7 min/Story, 2-3 WebSearches per type
- deep - 8-10 min/Story, comprehensive research
Output Structure
docs/market/[epic-slug]/
└── prioritization.md   # Consolidated table + RICE details + sources

Table columns (from user requirements):
| Priority | Customer Problem | Feature | Solution | Rationale | Impact | Market | Sources | Competition |
|---|---|---|---|---|---|---|---|---|
| P0 | User pain point | Story title | Technical approach | Why important | Business impact | $XB | [Link] | Blue 1-3 / Red 4-5 |
Research Tools
| Tool | Purpose | Example Query |
|---|---|---|
| WebSearch | Market size, competitors | "[domain] market size 2025" |
| mcp__Ref | Industry reports | "[domain] market analysis report" |
| Linear | Load Stories | list_issues(project=Epic.id) |
| Glob | Check existing | "docs/market/[epic]/*" |
Workflow
Phase 1: Discovery (2 min)
Objective: Validate input and prepare context.
Process:
1. Parse Epic input:
   - Accept: Epic ID, "Epic N", or Linear Project URL
   - Query: get_project(query=epic)
   - Extract: Epic ID, title, description
2. Auto-discover configuration:
   - Read docs/tasks/kanban_board.md for Team ID
   - Slugify Epic title for output path
3. Check existing prioritization:
   - Glob: docs/market/[epic-slug]/prioritization.md
   - If exists: Ask "Update existing or create new?"
   - If new: Continue
4. Create output directory:

   ```bash
   mkdir -p docs/market/[epic-slug]/
   ```
Output: Epic metadata, output path, existing check result
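The slugification in step 2 can be sketched as a small helper. This is an illustrative implementation, not the skill's actual code; the exact slug rules (e.g. whether the "Epic N" prefix is kept or dropped) are an assumption.

```python
import re

def slugify(title: str) -> str:
    """Lowercase the title and join alphanumeric runs with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

print(slugify("Epic 7: Translation API"))  # epic-7-translation-api
```

The resulting slug becomes the `[epic-slug]` segment of the output path.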
Phase 2: Load Stories Metadata (3 min)
Objective: Build Story queue with metadata only (token efficiency).
Process:
1. Query Stories from Epic:
   list_issues(project=Epic.id, label="user-story")
2. Extract metadata only:
   - Story ID, title, status
   - DO NOT load full descriptions yet
3. Filter Stories:
   - Exclude: Done, Cancelled, Archived
   - Include: Backlog, Todo, In Progress
4. Build processing queue:
   - Order by: existing priority (if any), then by ID
   - Count: N Stories to process
Output: Story queue (ID + title), ~50 tokens/Story
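The filter-and-order steps can be sketched as follows. The dict fields mirror the metadata listed above but are assumptions about the Linear response shape, and the tie-breaking (Stories with an existing priority first, lower number first) is one reading of the ordering rule.

```python
# Hypothetical Story metadata rows as returned by list_issues
stories = [
    {"id": "US002", "title": "PDF Support", "status": "Done", "priority": None},
    {"id": "US003", "title": "SRT/VTT Support", "status": "Backlog", "priority": 1},
    {"id": "US001", "title": "Translation Memory", "status": "Todo", "priority": None},
]

ACTIVE = {"Backlog", "Todo", "In Progress"}  # Done/Cancelled/Archived excluded

queue = [s for s in stories if s["status"] in ACTIVE]
# Existing priority first (lower number = higher priority), then by ID
queue.sort(key=lambda s: (s["priority"] is None, s["priority"] or 0, s["id"]))

print([s["id"] for s in queue])  # ['US003', 'US001']
```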
Phase 3: Story-by-Story Analysis Loop (5-10 min/Story)
Objective: For EACH Story: load description, research, score RICE.
Critical: Process Stories ONE BY ONE for token efficiency!
Per-Story Steps:
Step 3.1: Load Story Description
get_issue(id=storyId, includeRelations=false)

Extract from Story:
- Feature: Story title
- Customer Problem: From "So that [value]" + Context section
- Solution: From Technical Notes (implementation approach)
- Rationale: From AC + Success Criteria
Step 3.2: Research Market Size
WebSearch queries (based on depth):
"[customer problem domain] market size TAM 2025"
"[feature type] industry market forecast"

mcp__Ref query:
"[domain] market analysis Gartner Statista"

Extract:
- Market size: $XB (with unit: B=Billion, M=Million)
- Growth rate: X% CAGR
- Sources: URL + date

Confidence mapping:
- Industry report (Gartner, Statista) → Confidence 0.9-1.0
- News article → Confidence 0.7-0.8
- Blog/Forum → Confidence 0.5-0.6
Step 3.3: Research Competition
WebSearch queries:
"[feature] competitors alternatives 2025"
"[solution approach] market leaders"

Count competitors and classify:
| Competitors Found | Competition Index | Ocean Type |
|---|---|---|
| 0 | 1 | Blue Ocean |
| 1-2 | 2 | Emerging |
| 3-5 | 3 | Growing |
| 6-10 | 4 | Mature |
| >10 | 5 | Red Ocean |
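The classification table above can be expressed as a small lookup helper (the function name is illustrative):

```python
def competition_index(competitors_found: int) -> tuple[int, str]:
    """Map a competitor count to the 1-5 competition index and ocean type."""
    if competitors_found == 0:
        return 1, "Blue Ocean"
    if competitors_found <= 2:
        return 2, "Emerging"
    if competitors_found <= 5:
        return 3, "Growing"
    if competitors_found <= 10:
        return 4, "Mature"
    return 5, "Red Ocean"

print(competition_index(4))   # (3, 'Growing')
print(competition_index(12))  # (5, 'Red Ocean')
```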
Step 3.4: Calculate RICE Score
RICE = (Reach x Impact x Confidence) / Effort

Reach (1-10): Users affected per quarter
| Score | Users | Indicators |
|---|---|---|
| 1-2 | <500 | Niche, single persona |
| 3-4 | 500-2K | Department-level |
| 5-6 | 2K-5K | Organization-wide |
| 7-8 | 5K-10K | Multi-org |
| 9-10 | >10K | Platform-wide |
Impact (0.25-3.0): Business value
| Score | Level | Indicators |
|---|---|---|
| 0.25 | Minimal | Nice-to-have |
| 0.5 | Low | QoL improvement |
| 1.0 | Medium | Efficiency gain |
| 2.0 | High | Revenue driver |
| 3.0 | Massive | Strategic differentiator |
Confidence (0.5-1.0): Data quality (from Step 3.2)
Effort (1-10): Person-months
| Score | Time | Story Indicators |
|---|---|---|
| 1-2 | <2 weeks | 3 AC, simple CRUD |
| 3-4 | 2-4 weeks | 4 AC, integration |
| 5-6 | 1-2 months | 5 AC, complex logic |
| 7-8 | 2-3 months | External dependencies |
| 9-10 | 3+ months | New infrastructure |
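Putting the four factors together, the score is a single multiplication and division; rounding to one decimal is an assumption for display, not part of the formula:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort, rounded for display."""
    return round((reach * impact * confidence) / effort, 1)

# Reach 6 (organization-wide), Impact 2.0 (revenue driver),
# Confidence 0.8 (news-level sources), Effort 3 (2-4 weeks)
print(rice_score(6, 2.0, 0.8, 3))  # 3.2
```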
Step 3.5: Determine Priority
| Priority | RICE Threshold | Competition Override |
|---|---|---|
| P0 (Critical) | >= 30 | OR Competition = 1 (Blue Ocean monopoly) |
| P1 (High) | >= 15 | OR Competition <= 2 (Emerging market) |
| P2 (Medium) | >= 5 | - |
| P3 (Low) | < 5 | Competition = 5 (Red Ocean) forces P3 |
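One way to encode the thresholds and overrides is the helper below. The table leaves the interaction order ambiguous (e.g. whether a Red Ocean score trumps a RICE >= 30), so treating Competition = 5 as an absolute override is an assumption:

```python
def priority(rice: float, competition: int) -> str:
    """Apply the RICE thresholds with the competition overrides."""
    if competition == 5:                    # Red Ocean forces P3 (assumed absolute)
        return "P3"
    if rice >= 30 or competition == 1:      # Blue Ocean monopoly promotes to P0
        return "P0"
    if rice >= 15 or competition <= 2:      # Emerging market promotes to P1
        return "P1"
    if rice >= 5:
        return "P2"
    return "P3"

print(priority(32.0, 3))  # P0
print(priority(4.0, 5))   # P3
```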
Step 3.6: Store and Clear
- Append row to in-memory results table
- Clear Story description from context
- Move to next Story in queue
Output per Story: Complete row for prioritization table
Phase 4: Generate Prioritization Table (5 min)
Objective: Create consolidated markdown output.
Process:
1. Sort results:
   - Primary: Priority (P0 → P3)
   - Secondary: RICE score (descending)
2. Generate markdown:
   - Use template from references/prioritization_template.md
   - Fill: Priority Summary, Main Table, RICE Details, Sources
3. Save file:
   - Write: docs/market/[epic-slug]/prioritization.md
Output: Saved prioritization.md
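The two-level sort can be sketched with a priority rank map; the row dicts are illustrative stand-ins for the accumulated result rows:

```python
rows = [
    {"priority": "P1", "rice": 18.2, "feature": "SRT/VTT Support"},
    {"priority": "P0", "rice": 32.0, "feature": "Translation Memory"},
    {"priority": "P0", "rice": 40.5, "feature": "PDF Support"},
]

ORDER = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}
# Primary: priority ascending (P0 first); secondary: RICE descending
rows.sort(key=lambda r: (ORDER[r["priority"]], -r["rice"]))

print([r["feature"] for r in rows])
# ['PDF Support', 'Translation Memory', 'SRT/VTT Support']
```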
Phase 5: Summary & Next Steps (1 min)
Objective: Display results and recommendations.
Output format:
Prioritization Complete
Epic: [Epic N - Name]
Stories analyzed: X
Time elapsed: Y minutes
Priority Distribution:
- P0 (Critical): X Stories - Implement ASAP
- P1 (High): X Stories - Next sprint
- P2 (Medium): X Stories - Backlog
- P3 (Low): X Stories - Consider deferring
Top 3 Priorities:
- [Story Title] - RICE: X, Market: $XB, Competition: Blue/Red
Saved to:
docs/market/[epic-slug]/prioritization.md
Next Steps:
- Review table with stakeholders
- Run ln-300 for P0/P1 Stories first
- Consider cutting P3 Stories
---

Time-Box Constraints
| Depth | Per-Story | Total (10 Stories) |
|---|---|---|
| quick | 2-3 min | 20-30 min |
| standard | 5-7 min | 50-70 min |
| deep | 8-10 min | 80-100 min |
Time management rules:
- If Story exceeds time budget: Skip deep research, use estimates (Confidence 0.5)
- If total exceeds budget: Switch to "quick" depth for remaining Stories
- Parallel WebSearch where possible (market + competition)
Token Efficiency
Loading pattern:
- Phase 2: Metadata only (~50 tokens/Story)
- Phase 3: Full description ONE BY ONE (~3,000-5,000 tokens/Story)
- After each Story: Clear description, keep only result row (~100 tokens)
Memory management:
- Sequential processing (not parallel)
- Maximum context: 1 Story description at a time
- Results accumulate as compact table rows
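The loading pattern above amounts to a sequential loop that holds one description at a time. The helpers below are stand-ins for the real Linear call and for Steps 3.1-3.5, included only so the loop shape is concrete:

```python
def load_description(story_id: str) -> str:
    # Stand-in for get_issue(id=storyId, includeRelations=false)
    return f"full description of {story_id}"

def analyze(story: dict, description: str) -> dict:
    # Stand-in for Steps 3.1-3.5: research, RICE score, priority
    return {"id": story["id"], "rice": 0.0, "priority": "P2"}

queue = [{"id": "US001"}, {"id": "US002"}]
results = []
for story in queue:                              # sequential, never parallel
    description = load_description(story["id"])  # one description in context
    results.append(analyze(story, description))  # keep only the compact row
    del description                              # clear before the next Story

print(len(results))  # 2
```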
Integration with Ecosystem
Position in workflow:
ln-210 (Scope → Epics)
↓
ln-220 (Epic → Stories)
↓
ln-230 (RICE per Story → prioritization table) ← THIS SKILL
↓
ln-300 (Story → Tasks)

Dependencies:
- WebSearch, mcp__Ref (market research)
- Linear MCP (load Epic, Stories)
- Glob, Write, Bash (file operations)
Downstream usage:
- Sprint planning uses P0/P1 to select Stories
- ln-300 processes Stories in priority order
- Stakeholders review before implementation
Critical Rules
- Source all data - Every Market number needs source + date
- Prefer recent data - 2024-2025, warn if older
- Cross-reference - 2+ sources for Market size (reduce error)
- Time-box strictly - Skip depth for speed if needed
- Confidence levels - Mark High/Medium/Low for estimates
- No speculation - Only sourced claims, note "[No data]" gaps
- One Story at a time - Token efficiency critical
- Preserve language - If user asks in Russian, respond in Russian
Definition of Done
---
- Epic validated in Linear
- All Stories loaded (metadata, then descriptions per-Story)
- Market research completed (2+ sources per Story)
- RICE score calculated for each Story
- Competition index assigned (1-5)
- Priority assigned (P0/P1/P2/P3)
- Table sorted by Priority + RICE
- File saved to docs/market/[epic-slug]/prioritization.md
- Summary with top priorities and next steps
- Total time within budget
---
Example Usage
---
Basic usage:
ln-230-story-prioritizer epic="Epic 7"

With parameters:
ln-230-story-prioritizer epic="Epic 7: Translation API" depth="deep"

Specific Stories:
ln-230-story-prioritizer epic="Epic 7" stories="US001,US002,US003"

Example output (docs/market/translation-api/prioritization.md):
| Priority | Customer Problem | Feature | Solution | Rationale | Impact | Market | Sources | Competition |
|---|---|---|---|---|---|---|---|---|
| P0 | "Repeat translations cost GPU" | Translation Memory | Redis cache, 5ms lookup | 70-90% GPU cost reduction | High | $2B+ | M&M | 3 |
| P0 | "Can't translate PDF" | PDF Support | PDF parsing + layout | Enterprise blocker | High | $10B+ | Eden | 5 |
| P1 | "Need video subtitles" | SRT/VTT Support | Timing preservation | Blue Ocean opportunity | Medium | $5.7B | GMI | 2 |
---
Reference Files
---
| File | Purpose |
|---|---|
| prioritization_template.md | Output markdown template |
| rice_scoring_guide.md | RICE factor scales and examples |
| research_queries.md | WebSearch query templates by domain |
| competition_index.md | Blue/Red Ocean classification rules |
Version: 1.0.0
Last Updated: 2025-12-23
---