ln-511-test-researcher
Test Researcher
Researches real-world problems and edge cases before test planning to ensure tests cover actual user pain points, not just AC.
Purpose & Scope
- Research common problems in the feature domain using Web Search, MCP Ref, and Context7.
- Analyze how competitors solve the same problem.
- Find customer complaints and pain points on forums, StackOverflow, and Reddit.
- Post structured findings as a Linear comment for downstream skills (ln-512, ln-513).
- No test creation or status changes.
When to Use
This skill should be used when:
- Invoked by ln-510-test-planner at start of test planning pipeline
- Story has non-trivial functionality (external APIs, file formats, authentication)
- Need to discover edge cases beyond AC
Skip research when:
- Story is trivial (simple CRUD, no external dependencies)
- Research comment already exists on Story
- User explicitly requests to skip
Workflow
Phase 1: Discovery
Input: Story ID from the orchestrator (ln-510).
Auto-discover the Team ID from `docs/tasks/kanban_board.md`.
Phase 2: Extract Feature Domain
- Fetch Story from Linear
- Parse Story goal and AC to identify:
- What technology/API/format is involved?
- What is the user's goal? (e.g., "translate XLIFF files", "authenticate via OAuth")
- Extract keywords for research queries
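The Phase 2 extraction step can be sketched as a simple vocabulary match. This is a minimal illustration, not the actual skill implementation: the Story payload shape and the `DOMAIN_TERMS` vocabulary are assumptions for the example.

```python
import re

# Illustrative vocabulary of technologies/formats/APIs worth researching.
# In practice this would be broader or derived from project context.
DOMAIN_TERMS = {"xliff", "oauth", "deepl", "api", "json", "csv", "unicode"}

def extract_keywords(story):
    """Return known domain terms found in the Story title and description."""
    text = f"{story['title']} {story['description']}".lower()
    words = set(re.findall(r"[a-z0-9]+", text))
    return sorted(words & DOMAIN_TERMS)

# Hypothetical Story payload; field names are illustrative, not the Linear schema.
story = {
    "title": "Translate XLIFF files via DeepL API",
    "description": "Upload XLIFF 1.2 files and get translated output. AC: handle OAuth errors.",
}
keywords = extract_keywords(story)  # ["api", "deepl", "oauth", "xliff"]
```

A trivial Story like "Add button to page" would yield no keywords, which is one signal that research can be skipped.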
Phase 3: Research Common Problems
Use available tools to find real-world problems:
- Web Search:
  - "[feature] common problems"
  - "[format] edge cases"
  - "[API] gotchas"
  - "[technology] known issues"
- MCP Ref:
  - `ref_search_documentation("[feature] error handling best practices")`
  - `ref_search_documentation("[format] validation rules")`
- Context7:
  - Query relevant library docs for known issues
  - Check API documentation for limitations
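The query templates above can be expanded mechanically once keywords are known. A minimal sketch under that assumption; the template strings mirror the Web Search bullets, and nothing here calls a real search tool.

```python
# Phase 3 query templates, keyed off an extracted keyword.
QUERY_TEMPLATES = [
    "{feature} common problems",
    "{feature} edge cases",
    "{feature} gotchas",
    "{feature} known issues",
]

def build_queries(keywords):
    """Cross every keyword with every template to get concrete search strings."""
    return [t.format(feature=kw) for kw in keywords for t in QUERY_TEMPLATES]

queries = build_queries(["XLIFF", "OAuth"])
# queries[0] == "XLIFF common problems"; 2 keywords x 4 templates = 8 queries
```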
Phase 4: Research Competitor Solutions
- Web Search:
  - "[competitor] [feature] how it works"
  - "[feature] comparison"
  - "[product type] best practices"
- Analysis:
  - How do market leaders handle this functionality?
  - What UX patterns do they use?
  - What error handling approaches are common?
Phase 5: Research Customer Complaints
- Web Search:
  - "[feature] complaints"
  - "[product type] user problems"
  - "[format] issues reddit"
  - "[format] issues stackoverflow"
- Analysis:
  - What do users actually struggle with?
  - What are common frustrations?
  - What gaps exist between user expectations and typical implementations?
Phase 6: Compile and Post Findings
- Compile findings into categories:
  - Input validation issues (malformed data, encoding, size limits)
  - Edge cases (empty input, special characters, Unicode)
  - Error handling (timeouts, rate limits, partial failures)
  - Security concerns (injection, authentication bypass)
  - Competitor advantages (features we should match or exceed)
  - Customer pain points (problems users actually complain about)
- Post a Linear comment on the Story with the research summary:
```markdown
## Test Research: {Feature}

### Sources Consulted
- Source 1
- Source 2

### Common Problems Found
- Problem 1: Description + test case suggestion
- Problem 2: Description + test case suggestion

### Competitor Analysis
- Competitor A: How they handle this + what we can learn
- Competitor B: Their approach + gaps we can exploit

### Customer Pain Points
- Complaint 1: What users struggle with + test to prevent
- Complaint 2: Common frustration + how to verify we solve it

### Recommended Test Coverage
- Test case for problem 1
- Test case for competitor parity
- Test case for customer pain point

This research informs both manual tests (ln-512) and automated tests (ln-513).
```
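Filling the comment template can be sketched as straightforward string assembly. The `findings` dict shape and the sample entries below are illustrative assumptions; actually posting the comment via the Linear API is out of scope here.

```python
def render_research_comment(feature, findings):
    """Build a markdown comment body: one ### section per findings category."""
    lines = [f"## Test Research: {feature}", ""]
    for section, items in findings.items():
        lines.append(f"### {section}")
        lines.extend(f"- {item}" for item in items)
        lines.append("")
    lines.append("This research informs both manual tests (ln-512) and automated tests (ln-513).")
    return "\n".join(lines)

# Hypothetical findings for an XLIFF-translation Story.
comment = render_research_comment(
    "XLIFF Translation",
    {
        "Sources Consulted": ["StackOverflow thread on XLIFF namespace handling"],
        "Common Problems Found": ["Malformed XML declarations: suggest a parser-error test"],
        "Recommended Test Coverage": [
            "Test case for empty trans-unit elements",
            "Test case for mixed-encoding input",
            "Test case for oversized files",
        ],
    },
)
```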
Critical Rules
- No test creation: Only research and documentation.
- No status changes: Only Linear comment.
- Source attribution: Always include URLs for sources consulted.
- Actionable findings: Each problem should suggest a test case.
- Skip trivial Stories: Don't research "Add button to page".
Definition of Done
- Feature domain extracted from Story (technology/API/format identified)
- Common problems researched (Web Search + MCP Ref + Context7)
- Competitor solutions analyzed (at least 1-2 competitors)
- Customer complaints found (forums, StackOverflow, Reddit)
- Findings compiled into categories
- Linear comment posted with "## Test Research: {Feature}" header
- At least 3 recommended test cases suggested
Output: Linear comment with research findings for ln-512 and ln-513 to use.
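A lightweight self-check against this Definition of Done could look like the sketch below. The heuristics are assumptions: "at least 3 recommended test cases" is read as three bullets under the Recommended Test Coverage heading of the comment format shown earlier.

```python
def meets_dod(comment: str) -> bool:
    """Heuristic check: required header present and >= 3 coverage bullets."""
    if "## Test Research:" not in comment:
        return False
    in_coverage = False
    test_cases = 0
    for line in comment.splitlines():
        if line.startswith("### "):
            # Only count bullets while inside the coverage section.
            in_coverage = line.strip() == "### Recommended Test Coverage"
        elif in_coverage and line.startswith("- "):
            test_cases += 1
    return test_cases >= 3
```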
Reference Files
- Research methodology: Web Search, MCP Ref, Context7 tools
- Comment format: Structured markdown with sources
- Downstream consumers: ln-512-manual-tester, ln-513-auto-test-planner
Version: 1.0.0 (Initial release - extracted from ln-503-manual-tester Phase 0)
Last Updated: 2026-01-15