ln-230-story-prioritizer


Story Prioritizer

Evaluate Stories using RICE scoring with market research. Generate a consolidated prioritization table for the Epic.

Purpose & Scope

  • Prioritize Stories AFTER ln-220 creates them
  • Research market size and competition per Story
  • Calculate RICE score for each Story
  • Generate prioritization table (P0/P1/P2/P3)
  • Output: docs/market/[epic-slug]/prioritization.md

When to Use

Use this skill when:
  • Stories have been created by ln-220 and need business prioritization
  • Planning sprint with limited capacity (which Stories first?)
  • Stakeholder review requires data-driven priorities
  • Evaluating feature ROI before implementation
Do NOT use when:
  • Epic has no Stories yet (run ln-220 first)
  • Stories are purely technical (infrastructure, refactoring)
  • Prioritization already exists in docs/market/
Who calls this skill:
  • User (manual) - after ln-220-story-coordinator
  • Future: ln-200-scope-decomposer (optional Phase)


Input Parameters

| Parameter | Required | Description | Default |
|-----------|----------|-------------|---------|
| epic | Yes | Epic ID or "Epic N" format | - |
| stories | No | Specific Story IDs to prioritize | All in Epic |
| depth | No | Research depth (quick/standard/deep) | "standard" |
depth options:
  • quick: 2-3 min/Story, 1 WebSearch per type
  • standard: 5-7 min/Story, 2-3 WebSearches per type
  • deep: 8-10 min/Story, comprehensive research


Output Structure

docs/market/[epic-slug]/
└── prioritization.md    # Consolidated table + RICE details + sources
Table columns (from user requirements):
| Priority | Customer Problem | Feature | Solution | Rationale | Impact | Market | Sources | Competition |
|---|---|---|---|---|---|---|---|---|
| P0 | User pain point | Story title | Technical approach | Why important | Business impact | $XB | [Link] | Blue 1-3 / Red 4-5 |


Research Tools

| Tool | Purpose | Example Query |
|---|---|---|
| WebSearch | Market size, competitors | "[domain] market size 2025" |
| mcp__Ref | Industry reports | "[domain] market analysis report" |
| Linear | Load Stories | list_issues(project=Epic.id) |
| Glob | Check existing | "docs/market/[epic]/*" |


Workflow

Phase 1: Discovery (2 min)

Objective: Validate input and prepare context.
Process:
  1. Parse Epic input:
    • Accept: Epic ID, "Epic N", or Linear Project URL
    • Query:
      get_project(query=epic)
    • Extract: Epic ID, title, description
  2. Auto-discover configuration:
    • Read
      docs/tasks/kanban_board.md
      for Team ID
    • Slugify Epic title for output path
  3. Check existing prioritization:
    Glob: docs/market/[epic-slug]/prioritization.md
    • If exists: Ask "Update existing or create new?"
    • If new: Continue
  4. Create output directory:
    bash
    mkdir -p docs/market/[epic-slug]/
Output: Epic metadata, output path, existing check result

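The slugification in step 2 above can be sketched as follows (a minimal sketch assuming simple ASCII slug rules; the `slugify` helper name is illustrative, not part of the skill's API):

```python
import re

def slugify(title: str) -> str:
    """Lowercase the Epic title, collapse non-alphanumerics to hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# e.g. "Epic 7: Translation API" -> "epic-7-translation-api"
output_dir = f"docs/market/{slugify('Epic 7: Translation API')}/"
```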

Phase 2: Load Stories Metadata (3 min)

Objective: Build Story queue with metadata only (token efficiency).
Process:
  1. Query Stories from Epic:
    list_issues(project=Epic.id, label="user-story")
  2. Extract metadata only:
    • Story ID, title, status
    • DO NOT load full descriptions yet
  3. Filter Stories:
    • Exclude: Done, Cancelled, Archived
    • Include: Backlog, Todo, In Progress
  4. Build processing queue:
    • Order by: existing priority (if any), then by ID
    • Count: N Stories to process
Output: Story queue (ID + title), ~50 tokens/Story


Phase 3: Story-by-Story Analysis Loop (5-10 min/Story)

Objective: For EACH Story: load description, research, score RICE.
Critical: Process Stories ONE BY ONE for token efficiency!

Per-Story Steps:


Step 3.1: Load Story Description
get_issue(id=storyId, includeRelations=false)
Extract from Story:
  • Feature: Story title
  • Customer Problem: From "So that [value]" + Context section
  • Solution: From Technical Notes (implementation approach)
  • Rationale: From AC + Success Criteria
Step 3.2: Research Market Size
WebSearch queries (based on depth):
"[customer problem domain] market size TAM 2025"
"[feature type] industry market forecast"
mcp__Ref query:
"[domain] market analysis Gartner Statista"
Extract:
  • Market size: $XB (with unit: B=Billion, M=Million)
  • Growth rate: X% CAGR
  • Sources: URL + date
Confidence mapping:
  • Industry report (Gartner, Statista) → Confidence 0.9-1.0
  • News article → Confidence 0.7-0.8
  • Blog/Forum → Confidence 0.5-0.6
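The source-to-confidence mapping above can be expressed as a small lookup. The band midpoints and category keys below are assumptions for illustration; a real implementation would classify sources by domain:

```python
# Midpoints of the confidence bands from the mapping above (assumed values).
CONFIDENCE_BY_SOURCE = {
    "industry_report": 0.95,  # Gartner, Statista: 0.9-1.0
    "news": 0.75,             # news article: 0.7-0.8
    "blog_forum": 0.55,       # blog/forum: 0.5-0.6
}

def confidence_for(source_type: str) -> float:
    # Unknown sources fall back to the floor of the RICE confidence scale.
    return CONFIDENCE_BY_SOURCE.get(source_type, 0.5)
```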
Step 3.3: Research Competition
WebSearch queries:
"[feature] competitors alternatives 2025"
"[solution approach] market leaders"
Count competitors and classify:
| Competitors Found | Competition Index | Ocean Type |
|---|---|---|
| 0 | 1 | Blue Ocean |
| 1-2 | 2 | Emerging |
| 3-5 | 3 | Growing |
| 6-10 | 4 | Mature |
| >10 | 5 | Red Ocean |
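The competitor-count classification above translates directly into a banded lookup (a sketch; the function name is illustrative):

```python
def competition_index(n_competitors: int) -> tuple[int, str]:
    """Map competitor count to the 1-5 competition index and ocean type."""
    bands = [(0, 1, "Blue Ocean"), (2, 2, "Emerging"),
             (5, 3, "Growing"), (10, 4, "Mature")]
    for upper, index, ocean in bands:
        if n_competitors <= upper:
            return index, ocean
    return 5, "Red Ocean"  # more than 10 competitors
```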
Step 3.4: Calculate RICE Score
RICE = (Reach × Impact × Confidence) / Effort
Reach (1-10): Users affected per quarter
| Score | Users | Indicators |
|---|---|---|
| 1-2 | <500 | Niche, single persona |
| 3-4 | 500-2K | Department-level |
| 5-6 | 2K-5K | Organization-wide |
| 7-8 | 5K-10K | Multi-org |
| 9-10 | >10K | Platform-wide |
Impact (0.25-3.0): Business value
| Score | Level | Indicators |
|---|---|---|
| 0.25 | Minimal | Nice-to-have |
| 0.5 | Low | QoL improvement |
| 1.0 | Medium | Efficiency gain |
| 2.0 | High | Revenue driver |
| 3.0 | Massive | Strategic differentiator |
Confidence (0.5-1.0): Data quality (from Step 3.2)
Effort (1-10): Person-months
| Score | Time | Story Indicators |
|---|---|---|
| 1-2 | <2 weeks | 3 AC, simple CRUD |
| 3-4 | 2-4 weeks | 4 AC, integration |
| 5-6 | 1-2 months | 5 AC, complex logic |
| 7-8 | 2-3 months | External dependencies |
| 9-10 | 3+ months | New infrastructure |
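Putting the four factors together, the score computation is a one-liner plus range checks taken from the scales above (a minimal sketch; rounding to one decimal is an assumption for table display):

```python
def rice_score(reach: int, impact: float, confidence: float, effort: int) -> float:
    """RICE = (Reach × Impact × Confidence) / Effort, rounded for the table."""
    if not (1 <= reach <= 10 and 0.25 <= impact <= 3.0
            and 0.5 <= confidence <= 1.0 and 1 <= effort <= 10):
        raise ValueError("RICE factor out of range")
    return round(reach * impact * confidence / effort, 1)

# e.g. reach=8, impact=2.0, confidence=0.9, effort=3 -> 4.8
```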
Step 3.5: Determine Priority
| Priority | RICE Threshold | Competition Override |
|---|---|---|
| P0 (Critical) | >= 30 | OR Competition = 1 (Blue Ocean monopoly) |
| P1 (High) | >= 15 | OR Competition <= 2 (Emerging market) |
| P2 (Medium) | >= 5 | - |
| P3 (Low) | < 5 | Competition = 5 (Red Ocean) forces P3 |
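One consistent reading of the threshold table is sketched below. It assumes the Red Ocean rule wins over any RICE score (per the "forces P3" wording), and that the Blue Ocean and Emerging overrides promote regardless of RICE:

```python
def priority(rice: float, competition: int) -> str:
    """Apply RICE thresholds plus the competition overrides."""
    if competition == 5:                  # Red Ocean forces P3
        return "P3"
    if rice >= 30 or competition == 1:    # Blue Ocean monopoly promotes to P0
        return "P0"
    if rice >= 15 or competition <= 2:    # Emerging market promotes to P1
        return "P1"
    return "P2" if rice >= 5 else "P3"
```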
Step 3.6: Store and Clear
  • Append row to in-memory results table
  • Clear Story description from context
  • Move to next Story in queue
Output per Story: Complete row for prioritization table


Phase 4: Generate Prioritization Table (5 min)


Objective: Create consolidated markdown output.
Process:
  1. Sort results:
    • Primary: Priority (P0 → P3)
    • Secondary: RICE score (descending)
  2. Generate markdown:
    • Use template from references/prioritization_template.md
    • Fill: Priority Summary, Main Table, RICE Details, Sources
  3. Save file:
    Write: docs/market/[epic-slug]/prioritization.md
Output: Saved prioritization.md

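The two-level sort in step 1 can be sketched as below, with each `row` being the dict accumulated in Step 3.6 (the field names are illustrative; the lexicographic trick works because priorities are P0-P3):

```python
rows = [
    {"priority": "P1", "rice": 18.0, "feature": "SRT/VTT Support"},
    {"priority": "P0", "rice": 4.8, "feature": "Translation Memory"},
    {"priority": "P0", "rice": 9.6, "feature": "PDF Support"},
]

# Primary key: priority ascending ("P0" sorts before "P3" lexicographically);
# secondary key: RICE score descending.
rows.sort(key=lambda r: (r["priority"], -r["rice"]))

print([r["feature"] for r in rows])
# ['PDF Support', 'Translation Memory', 'SRT/VTT Support']
```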

Phase 5: Summary & Next Steps (1 min)


Objective: Display results and recommendations.
Output format:

Prioritization Complete


Epic: [Epic N - Name]
Stories analyzed: X
Time elapsed: Y minutes

Priority Distribution:


  • P0 (Critical): X Stories - Implement ASAP
  • P1 (High): X Stories - Next sprint
  • P2 (Medium): X Stories - Backlog
  • P3 (Low): X Stories - Consider deferring

Top 3 Priorities:


  1. [Story Title] - RICE: X, Market: $XB, Competition: Blue/Red

Saved to:


docs/market/[epic-slug]/prioritization.md

Next Steps:


  1. Review table with stakeholders
  2. Run ln-300 for P0/P1 Stories first
  3. Consider cutting P3 Stories

---

Time-Box Constraints


| Depth | Per-Story | Total (10 Stories) |
|---|---|---|
| quick | 2-3 min | 20-30 min |
| standard | 5-7 min | 50-70 min |
| deep | 8-10 min | 80-100 min |
Time management rules:
  • If Story exceeds time budget: Skip deep research, use estimates (Confidence 0.5)
  • If total exceeds budget: Switch to "quick" depth for remaining Stories
  • Parallel WebSearch where possible (market + competition)


Token Efficiency


Loading pattern:
  • Phase 2: Metadata only (~50 tokens/Story)
  • Phase 3: Full description ONE BY ONE (~3,000-5,000 tokens/Story)
  • After each Story: Clear description, keep only result row (~100 tokens)
Memory management:
  • Sequential processing (not parallel)
  • Maximum context: 1 Story description at a time
  • Results accumulate as compact table rows

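The one-Story-at-a-time pattern amounts to the loop below (a sketch; `load_description` and `analyze` stand in for the real Linear tool call and the Phase 3 steps, and the token figures are the estimates stated above):

```python
def prioritize(story_queue, load_description, analyze):
    """Sequentially load, analyze, and discard each Story description."""
    results = []
    for story_id, title in story_queue:        # metadata only (~50 tokens each)
        description = load_description(story_id)    # ~3,000-5,000 tokens
        results.append(analyze(story_id, title, description))  # ~100-token row
        del description                # drop the full text before the next Story
    return results
```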

Integration with Ecosystem


Position in workflow:
ln-210 (Scope → Epics)
ln-220 (Epic → Stories)
ln-230 (RICE per Story → prioritization table) ← THIS SKILL
ln-300 (Story → Tasks)
Dependencies:
  • WebSearch, mcp__Ref (market research)
  • Linear MCP (load Epic, Stories)
  • Glob, Write, Bash (file operations)
Downstream usage:
  • Sprint planning uses P0/P1 to select Stories
  • ln-300 processes Stories in priority order
  • Stakeholders review before implementation


Critical Rules


  1. Source all data - Every Market number needs source + date
  2. Prefer recent data - 2024-2025, warn if older
  3. Cross-reference - 2+ sources for Market size (reduce error)
  4. Time-box strictly - Skip depth for speed if needed
  5. Confidence levels - Mark High/Medium/Low for estimates
  6. No speculation - Only sourced claims, note "[No data]" gaps
  7. One Story at a time - Token efficiency critical
  8. Preserve language - If user asks in Russian, respond in Russian



Definition of Done

  • Epic validated in Linear
  • All Stories loaded (metadata, then descriptions per-Story)
  • Market research completed (2+ sources per Story)
  • RICE score calculated for each Story
  • Competition index assigned (1-5)
  • Priority assigned (P0/P1/P2/P3)
  • Table sorted by Priority + RICE
  • File saved to docs/market/[epic-slug]/prioritization.md
  • Summary with top priorities and next steps
  • Total time within budget

Example Usage

Basic usage:
ln-230-story-prioritizer epic="Epic 7"
With parameters:
ln-230-story-prioritizer epic="Epic 7: Translation API" depth="deep"
Specific Stories:
ln-230-story-prioritizer epic="Epic 7" stories="US001,US002,US003"
Example output (docs/market/translation-api/prioritization.md):
| Priority | Customer Problem | Feature | Solution | Rationale | Impact | Market | Sources | Competition |
|---|---|---|---|---|---|---|---|---|
| P0 | "Repeat translations cost GPU" | Translation Memory | Redis cache, 5ms lookup | 70-90% GPU cost reduction | High | $2B+ | M&M | 3 |
| P0 | "Can't translate PDF" | PDF Support | PDF parsing + layout | Enterprise blocker | High | $10B+ | Eden | 5 |
| P1 | "Need video subtitles" | SRT/VTT Support | Timing preservation | Blue Ocean opportunity | Medium | $5.7B | GMI | 2 |

Reference Files

| File | Purpose |
|---|---|
| prioritization_template.md | Output markdown template |
| rice_scoring_guide.md | RICE factor scales and examples |
| research_queries.md | WebSearch query templates by domain |
| competition_index.md | Blue/Red Ocean classification rules |

Version: 1.0.0 Last Updated: 2025-12-23