ln-521-test-researcher


Test Researcher


Researches real-world problems and edge cases before test planning to ensure tests cover actual user pain points, not just acceptance criteria (AC).

Purpose & Scope


  • Research common problems for the feature domain using Web Search, MCP Ref, Context7.
  • Analyze how competitors solve the same problem.
  • Find customer complaints and pain points from forums, StackOverflow, Reddit.
  • Post structured findings as a Linear comment for downstream skills (ln-522, ln-523).
  • No test creation or status changes.

When to Use


This skill should be used when:
  • Invoked by ln-520-test-planner at start of test planning pipeline
  • Story has non-trivial functionality (external APIs, file formats, authentication)
  • Need to discover edge cases beyond AC
Skip research when:
  • Story is trivial (simple CRUD, no external dependencies)
  • Research comment already exists on Story
  • User explicitly requests to skip
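The use/skip criteria above can be sketched as a small predicate. This is an illustrative assumption, not the skill's real interface: the `Story` fields and the trivial-story heuristic are stand-ins for however the orchestrator actually models a Linear Story.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """Illustrative stand-in for a Linear Story; field names are assumptions."""
    identifier: str
    has_external_dependencies: bool       # external APIs, file formats, auth, ...
    comments: list = field(default_factory=list)
    user_requested_skip: bool = False

def should_research(story: Story) -> bool:
    """Apply the skip rules: explicit skip, existing research, or trivial story."""
    if story.user_requested_skip:
        return False
    if any(c.startswith("## Test Research:") for c in story.comments):
        return False  # research comment already exists on the Story
    # Trivial stories (simple CRUD, no external dependencies) are skipped
    return story.has_external_dependencies
```

The order of checks mirrors the list above: an explicit user request wins, then an existing research comment, then the triviality heuristic.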

Workflow


Phase 1: Discovery


Auto-discover Team ID from docs/tasks/kanban_board.md.
Input: Story ID from orchestrator (ln-520)
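The Team ID discovery could look like the sketch below. The actual layout of docs/tasks/kanban_board.md is not specified here, so the `Team ID: <value>` label is an assumed convention; the pattern would need to match the real file.

```python
import re
from typing import Optional

def discover_team_id(board_markdown: str) -> Optional[str]:
    """Scan kanban board text for a 'Team ID: <value>' line.

    The 'Team ID:' label is an assumption about the board format;
    adjust the pattern to the real docs/tasks/kanban_board.md layout.
    """
    match = re.search(r"Team ID:\s*([\w-]+)", board_markdown)
    return match.group(1) if match else None
```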

Phase 2: Extract Feature Domain


  1. Fetch Story from Linear
  2. Parse Story goal and AC to identify:
    • What technology/API/format is involved?
    • What is the user's goal? (e.g., "translate XLIFF files", "authenticate via OAuth")
  3. Extract keywords for research queries

Phase 3: Research Common Problems


Use available tools to find real-world problems:
  1. Web Search:
    • "[feature] common problems"
    • "[format] edge cases"
    • "[API] gotchas"
    • "[technology] known issues"
  2. MCP Ref:
    • ref_search_documentation("[feature] error handling best practices")
    • ref_search_documentation("[format] validation rules")
  3. Context7:
    • Query relevant library docs for known issues
    • Check API documentation for limitations
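The Web Search templates above expand mechanically from the keywords extracted in Phase 2. A minimal sketch, assuming the four keyword slots map one-to-one onto the query patterns:

```python
def build_search_queries(feature: str, fmt: str, api: str, tech: str) -> list[str]:
    """Expand the Phase 3 Web Search templates with extracted keywords."""
    return [
        f"{feature} common problems",
        f"{fmt} edge cases",
        f"{api} gotchas",
        f"{tech} known issues",
    ]
```

In practice a Story rarely yields all four keyword kinds; templates whose slot is missing would simply be dropped.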

Phase 4: Research Competitor Solutions


  1. Web Search:
    • "[competitor] [feature] how it works"
    • "[feature] comparison"
    • "[product type] best practices"
  2. Analysis:
    • How do market leaders handle this functionality?
    • What UX patterns do they use?
    • What error handling approaches are common?

Phase 5: Research Customer Complaints


  1. Web Search:
    • "[feature] complaints"
    • "[product type] user problems"
    • "[format] issues reddit"
    • "[format] issues stackoverflow"
  2. Analysis:
    • What do users actually struggle with?
    • What are common frustrations?
    • What gaps exist between user expectations and typical implementations?

Phase 6: Compile and Post Findings


  1. Compile findings into categories:
    • Input validation issues (malformed data, encoding, size limits)
    • Edge cases (empty input, special characters, Unicode)
    • Error handling (timeouts, rate limits, partial failures)
    • Security concerns (injection, authentication bypass)
    • Competitor advantages (features we should match or exceed)
    • Customer pain points (problems users actually complain about)
  2. Post a Linear comment on the Story with the research summary:

```markdown
## Test Research: {Feature}

### Sources Consulted

- Source 1
- Source 2

### Common Problems Found

1. Problem 1: Description + test case suggestion
2. Problem 2: Description + test case suggestion

### Competitor Analysis

- Competitor A: How they handle this + what we can learn
- Competitor B: Their approach + gaps we can exploit

### Customer Pain Points

- Complaint 1: What users struggle with + test to prevent
- Complaint 2: Common frustration + how to verify we solve it

### Recommended Test Coverage

- Test case for problem 1
- Test case for competitor parity
- Test case for customer pain point

This research informs both manual tests (ln-522) and automated tests (ln-523).
```
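Compiling categorized findings into that comment body can be sketched as follows. The `Finding` type and `render_comment` helper are illustrative assumptions; only the category names from step 1 and the `## Test Research: {Feature}` header come from this skill.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One research finding; 'source' should be a URL (source attribution rule)."""
    category: str          # e.g. "Edge cases", "Customer pain points"
    description: str
    suggested_test: str
    source: str

def render_comment(feature: str, findings: list[Finding]) -> str:
    """Render findings into the '## Test Research: {Feature}' comment layout."""
    lines = [f"## Test Research: {feature}", "", "### Sources Consulted"]
    for url in dict.fromkeys(f.source for f in findings):  # dedupe, keep order
        lines.append(f"- {url}")
    by_cat: dict[str, list[Finding]] = {}
    for f in findings:
        by_cat.setdefault(f.category, []).append(f)
    for cat, items in by_cat.items():
        lines += ["", f"### {cat}"]
        for f in items:
            lines.append(f"- {f.description} + suggested test: {f.suggested_test}")
    return "\n".join(lines)
```

Keeping the suggested test next to each finding is what makes the comment directly usable by ln-522 and ln-523 downstream.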

Critical Rules


  • No test creation: Only research and documentation.
  • No status changes: Only Linear comment.
  • Source attribution: Always include URLs for sources consulted.
  • Actionable findings: Each problem should suggest a test case.
  • Skip trivial Stories: Don't research "Add button to page".

Definition of Done


  • Feature domain extracted from Story (technology/API/format identified)
  • Common problems researched (Web Search + MCP Ref + Context7)
  • Competitor solutions analyzed (at least 1-2 competitors)
  • Customer complaints found (forums, StackOverflow, Reddit)
  • Findings compiled into categories
  • Linear comment posted with "## Test Research: {Feature}" header
  • At least 3 recommended test cases suggested
Output: Linear comment with research findings for ln-522 and ln-523 to use.
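Two of the Definition of Done items are mechanically checkable on the comment text: the header and the minimum of three recommended test cases. A minimal sketch, assuming the comment uses a `### Recommended Test Coverage` subheading as in the template:

```python
def meets_definition_of_done(comment: str) -> bool:
    """Check two mechanical DoD items: the header and >= 3 recommended test cases.

    Counts bullet lines under '### Recommended Test Coverage'; the '###'
    heading level is an assumption about the comment template.
    """
    if "## Test Research:" not in comment:
        return False
    in_section = False
    test_cases = 0
    for line in comment.splitlines():
        if line.startswith("### "):
            in_section = line.startswith("### Recommended Test Coverage")
        elif in_section and line.lstrip().startswith("- "):
            test_cases += 1
    return test_cases >= 3
```

The remaining DoD items (competitors analyzed, complaints found) need human or agent judgment and are not verifiable this way.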

Reference Files


  • Research methodology: Web Search, MCP Ref, Context7 tools
  • Comment format: Structured markdown with sources
  • Downstream consumers: ln-522-manual-tester, ln-523-auto-test-planner

Version: 1.0.0 Last Updated: 2026-01-15