software-ux-research
Software UX Research Skill — Quick Reference
Use this skill to identify problems/opportunities and de-risk decisions. Use software-ui-ux-design to implement UI patterns, component changes, and design system updates.
Dec 2025 Baselines (Core)
- Human-centred design: Iterative design + evaluation grounded in evidence (ISO 9241-210:2019) https://www.iso.org/standard/77520.html
- Usability definition: Effectiveness, efficiency, satisfaction in context (ISO 9241-11:2018) https://www.iso.org/standard/63500.html
- Accessibility baseline: WCAG 2.2 is a W3C Recommendation (12 Dec 2024) https://www.w3.org/TR/WCAG22/
- WCAG 3.0 preview: Working Draft published Sep 2025; introduces Bronze/Silver/Gold conformance tiers and enhanced cognitive accessibility; not expected before 2028-2030 https://www.w3.org/WAI/standards-guidelines/wcag/wcag3-intro/
- EU shipping note: European Accessibility Act applies to covered products/services after 28 Jun 2025 (Directive (EU) 2019/882) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32019L0882
When to Use This Skill
- Discovery: user needs, JTBD, opportunity sizing, mental models.
- Validation: concepts, prototypes, onboarding/first-run success.
- Evaluative: usability tests, heuristic evaluation, cognitive walkthroughs.
- Quant/behavioral: funnels, cohorts, instrumentation gaps, guardrails.
- Research Ops: intake, prioritization, repository/taxonomy, consent/PII handling.
- Demographic research: Age-diverse, cultural, accessibility participant recruitment.
- A/B testing: Experiment design, sample size, analysis, pitfalls.
When NOT to Use This Skill
- UI implementation → Use software-ui-ux-design for components, patterns, code
- Analytics instrumentation → Use marketing-product-analytics for tracking plans and qa-observability for implementation patterns
- Accessibility compliance audit → Use accessibility-specific checklists (WCAG conformance)
- Marketing research → Use marketing-social-media or related marketing skills
- A/B test platform setup → Use experimentation platforms (Statsig, GrowthBook, LaunchDarkly)
Operating Mode (Core)
If inputs are missing, ask for:
- Decision to unblock (what will change based on this research).
- Target roles/segments and top tasks.
- Platforms and contexts (web/mobile/desktop; remote/on-site; assisted tech).
- Existing evidence (analytics, tickets, reviews, recordings, prior studies).
- Constraints (timeline, recruitment access, compliance, budget).
Default outputs (pick what the user asked for):
- Research plan + output contract (prefer ../software-clean-code-standard/assets/checklists/ux-research-plan-template.md; use assets/research-plan-template.md for skill-specific detail)
- Study protocol (tasks/script + success metrics + recruitment plan)
- Findings report (issues + severity + evidence + recommendations + confidence)
- Decision brief (options + tradeoffs + recommendation + measurement plan)
Method Chooser (Core)
Research Types (Keep Explicit)
| Type | Goal | Primary Outputs |
|---|---|---|
| Discovery | Understand needs and context | JTBD, opportunity areas, constraints |
| Validation | Reduce solution risk | Go/no-go, prioritization signals |
| Evaluative | Improve usability/accessibility | Severity-rated issues + fixes |
Decision Tree (Fast)
```text
What do you need?
├─ WHY / needs / context → interviews, contextual inquiry, diary
├─ HOW / usability → moderated usability test, cognitive walkthrough, heuristic eval
├─ WHAT / scale → analytics/logs + targeted qual follow-ups
└─ WHICH / causal → experiments (if feasible) or preference tests
```
Method Selection Table (Practical)
| Question | Best methods | Avoid when | Output |
|---|---|---|---|
| What problems matter most? | Interviews, contextual inquiry, diary | Only surveys/analytics | Problem framing + evidence |
| Can users complete key tasks? | Moderated usability tests, task analysis | Stakeholder review | Task success + issue list |
| Is navigation findable? | Tree test, first-click, card sort | Extremely small audience [Inference] | IA changes + labels |
| What is happening at scale? | Funnels, cohorts, logs, support taxonomy | Instrumentation missing | Baselines + segments + drop-offs |
| Which variant performs better? | A/B, switchback, holdout | Insufficient power or high risk | Decision with confidence + guardrails |
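The "what is happening at scale" row can be sketched as a simple funnel computation. A minimal sketch in Python; the step names and counts are illustrative, not from any real product:

```python
# Minimal funnel drop-off sketch (illustrative step names and counts).
# Each step maps to the count of unique users who reached it.
funnel = [
    ("visited_pricing", 10_000),
    ("started_signup", 4_200),
    ("completed_signup", 3_150),
    ("activated", 1_890),
]

def drop_offs(steps):
    """Return per-step conversion from the previous step and from the top."""
    top = steps[0][1]
    rows = []
    prev = top
    for name, count in steps:
        rows.append({
            "step": name,
            "count": count,
            "from_prev": count / prev,  # step-to-step conversion
            "from_top": count / top,    # overall conversion
        })
        prev = count
    return rows

for row in drop_offs(funnel):
    print(f"{row['step']:>18}  {row['count']:>6}  "
          f"step conv {row['from_prev']:.0%}  overall {row['from_top']:.0%}")
```

The biggest step-to-step drop marks where targeted qualitative follow-ups pay off.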
Research by Product Stage
Stage Framework (What to Do When)
| Stage | Decisions | Primary Methods | Secondary Methods | Output |
|---|---|---|---|---|
| Discovery | What to build and for whom | Interviews, field/diary, journey mapping | Competitive analysis, feedback mining | Opportunity brief + JTBD |
| Concept/MVP | Does the concept work? | Concept test, prototype usability | First-click/tree test | MVP scope + onboarding plan |
| Launch | Is it usable + accessible? | Usability testing, accessibility review | Heuristic eval, session replay | Launch blockers + fixes |
| Growth | What drives adoption/value? | Segmented analytics + qual follow-ups | Churn interviews, surveys | Retention drivers + friction |
| Maturity | What to optimize/deprecate? | Experiments, longitudinal tracking | Unmoderated tests | Incremental roadmap |
Post-Launch Measurement (What to Track)
| Metric category | What it answers | Pair with |
|---|---|---|
| Adoption | Are people using it? | Outcome/value metric |
| Value | Does it help users succeed? | Adoption + qualitative reasons |
| Reliability | Does it fail in ways users notice? | Error rate + recovery success |
| Accessibility | Can diverse users complete flows? | Assistive-tech coverage + defect trends |
Research for Complex Systems (Workflows, Admin, Regulated)
Complexity Indicators
| Indicator | Example | Research Implication |
|---|---|---|
| Multi-step workflows | Draft → approve → publish | Task analysis + state mapping |
| Multi-role permissions | Admin vs editor vs viewer | Test each role + transitions |
| Data dependencies | Requires integrations/sync | Error-path + recovery testing |
| High stakes | Finance, healthcare | Safety checks + confirmations |
| Expert users | Dev tools, analytics | Recruit real experts (not proxies) |
Evaluation Methods (Core)
- Contextual inquiry: observe real work and constraints.
- Task analysis: map goals → steps → failure points.
- Cognitive walkthrough: evaluate learnability and signifiers.
- Error-path testing: timeouts, offline, partial data, permission loss, retries.
- Multi-role walkthrough: simulate handoffs (creator → reviewer → admin).
Multi-Role Coverage Checklist
- Role-permission matrix documented.
- “No access” UX defined (request path, least-privilege defaults).
- Cross-role handoffs tested (notifications, state changes, audit history).
- Error recovery tested for each role (retry, undo, escalation).
Research Ops & Governance (Core)
Intake (Make Requests Comparable)
Minimum required fields:
- Decision to unblock and deadline.
- Research questions (primary + secondary).
- Target users/segments and recruitment constraints.
- Existing evidence and links.
- Deliverable format + audience.
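The minimum intake fields above can be checked mechanically before a request enters the queue. A minimal sketch, assuming illustrative field names rather than a fixed schema:

```python
# Sketch: validate a research-intake request against the minimum fields
# listed above. Field names here are illustrative, not a fixed schema.
REQUIRED_FIELDS = (
    "decision_to_unblock",
    "deadline",
    "research_questions",
    "target_segments",
    "existing_evidence",
    "deliverable_format",
)

def missing_fields(request: dict) -> list[str]:
    """Return which required intake fields are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not request.get(f)]

request = {
    "decision_to_unblock": "Redesign checkout or keep current flow",
    "deadline": "2026-03-01",
    "research_questions": ["Why do users abandon at payment?"],
}
# Incomplete requests go back to the requester before scheduling.
print(missing_fields(request))
```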
Prioritization (Simple Scoring)
Use a lightweight score to avoid backlog paralysis:
- Decision impact
- Knowledge gap
- Timing urgency
- Feasibility (recruitment + time)
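The four criteria above can be combined into one score. A minimal sketch assuming equal weights and 1-5 ratings; both choices are assumptions to tune per team:

```python
# Lightweight prioritization score sketch: each criterion rated 1-5.
# Equal weights are an assumption; adjust to your org's needs.
def priority_score(decision_impact, knowledge_gap, urgency, feasibility):
    """Average of four 1-5 ratings; higher = do sooner."""
    criteria = (decision_impact, knowledge_gap, urgency, feasibility)
    if not all(1 <= c <= 5 for c in criteria):
        raise ValueError("ratings must be 1-5")
    return sum(criteria) / len(criteria)

backlog = {
    "checkout usability test": priority_score(5, 4, 5, 4),
    "brand perception survey": priority_score(2, 3, 1, 5),
}
for study, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {study}")
```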
Repository & Taxonomy
- Store each study with: method, date, product area, roles, tasks, key findings, raw evidence links.
- Tag for reuse: problem type (navigation/forms/performance), component/pattern, funnel step.
- Prefer “atomic” findings (one insight per card) to enable recombination [Inference].
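An "atomic" finding record per the guidance above might look like the following; all field names are illustrative and the evidence link is a placeholder:

```python
# Sketch of an "atomic" repository record: one insight per record,
# tagged for reuse. Field names are illustrative, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class Finding:
    insight: str                 # one sentence, one insight
    method: str                  # e.g. "moderated usability test"
    date: str
    product_area: str
    problem_type: str            # navigation / forms / performance ...
    severity: int                # 1 (cosmetic) to 4 (blocker)
    evidence_links: list[str] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)

finding = Finding(
    insight="Users miss the 'Save draft' action because it sits below the fold",
    method="moderated usability test",
    date="2026-01-15",
    product_area="editor",
    problem_type="navigation",
    severity=3,
    evidence_links=["https://repo.example/clips/123"],  # placeholder URL
    tags=["component:toolbar", "funnel:create"],
)
# Atomic records can be filtered by tag and recombined across studies.
print(finding.problem_type, finding.severity)
```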
Consent, PII, and Access Control
Follow applicable privacy laws; GDPR is a primary reference for EU processing https://eur-lex.europa.eu/eli/reg/2016/679/oj
PII handling checklist:
- Collect minimum PII needed for scheduling and incentives.
- Store identity/contact separately from study data.
- Redact names/emails from transcripts before broad sharing.
- Restrict raw recordings to need-to-know access.
- Document consent, purpose, retention, and opt-out path.
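The transcript-redaction step in the checklist can be partially automated. A minimal sketch; regex redaction is a baseline only, and output should still be reviewed before sharing:

```python
# Sketch: redact emails and known participant names from a transcript
# before broad sharing. Regex-based redaction is a baseline, not a
# guarantee -- review the output before distribution.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(transcript: str, participant_names: list[str]) -> str:
    text = EMAIL_RE.sub("[EMAIL]", transcript)
    for name in participant_names:
        text = re.sub(re.escape(name), "[PARTICIPANT]", text, flags=re.IGNORECASE)
    return text

raw = "Maria Lopez (maria.lopez@example.com) said the export button was hidden."
print(redact(raw, ["Maria Lopez"]))
```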
Research Democratization (2026 Trend)
Research democratization is a recurring 2026 trend: non-researchers increasingly conduct research. Enable carefully with guardrails.
| Approach | Guardrails | Risk Level |
|---|---|---|
| Templated usability tests | Script + task templates provided | Low |
| Customer interviews by PMs | Training + review required | Medium |
| Survey design by anyone | Central review + standard questions | Medium |
| Unsupervised research | Not recommended | High |
Guardrails for non-researchers:
- Pre-approved research templates only
- Central review of findings before action
- No direct participant recruitment without ops approval
- Mandatory bias awareness training
- Clear escalation path for unexpected findings
Measurement & Decision Quality (Core)
Research ROI Quick Reference
| Research Activity | Proxy Metric | Calculation |
|---|---|---|
| Usability testing finding | Prevented dev rework | Hours saved × $150/hr |
| Discovery interview | Prevented build-wrong-thing | Sprint cost × risk reduction % |
| A/B test conclusive result | Improved conversion | (ΔConversion × Traffic × LTV) - Test cost |
| Heuristic evaluation | Early defect detection | Defects found × Cost-to-fix-later |
Rules of thumb:
- 1 usability finding that prevents 40 hours of rework = $6,000 value
- 1 discovery insight that prevents 1 wasted sprint = $50,000-100,000 value
- Research that improves conversion 0.5% on 100k visitors × $50 LTV = $25,000/month
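The rules of thumb above are plain arithmetic and can be kept as a small helper; the rates and LTV figures are the document's illustrative numbers, not benchmarks:

```python
# The rules of thumb above as arithmetic. The hourly rate and LTV are
# the document's illustrative figures, not industry benchmarks.
def rework_value(hours_saved: float, hourly_rate: float = 150) -> float:
    """Value of a finding that prevents dev rework."""
    return hours_saved * hourly_rate

def monthly_conversion_value(delta_conversion: float,
                             monthly_visitors: int, ltv: float) -> float:
    """Monthly value of a conversion lift."""
    return delta_conversion * monthly_visitors * ltv

print(rework_value(40))                              # 40 h of rework avoided
print(monthly_conversion_value(0.005, 100_000, 50))  # +0.5% on 100k visitors, $50 LTV
```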
Triangulation Rubric
| Confidence | Evidence requirement | Use for |
|---|---|---|
| High | Multiple methods or sources agree | High-impact decisions |
| Medium | Strong signal from one method + supporting indicators | Prioritization |
| Low | Single source / small sample | Exploratory hypotheses |
Adoption vs Value (Avoid Vanity Metrics)
| Metric type | Example | Common pitfall |
|---|---|---|
| Adoption | Feature usage rate | “Used” ≠ “helpful” |
| Value/outcome | Task success, goal completion | Harder to instrument |
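Task success from small usability samples is noisy, so when it is used as a value metric, report an interval rather than a point estimate. A minimal sketch using the standard Wilson score interval:

```python
# Sketch: task success from a small usability sample is noisy; report an
# interval, not a point estimate. Wilson score interval at 95% confidence.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion."""
    if n == 0:
        raise ValueError("n must be > 0")
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))) / denom
    return centre - half, centre + half

# 4 of 6 participants completed the task: the 67% point estimate hides
# how wide the plausible range really is.
lo, hi = wilson_interval(4, 6)
print(f"task success: 67% (95% CI {lo:.0%}-{hi:.0%})")
```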
When NOT to Run A/B Tests
| Situation | Why it fails | Better method |
|---|---|---|
| Low power/traffic | Inconclusive results | Usability tests + trends |
| Many variables change | Attribution impossible | Prototype tests → staged rollout |
| Need “why” | Experiments don’t explain | Interviews + observation |
| Ethical constraints | Harmful denial | Phased rollout + holdouts |
| Long-term effects | Short tests miss delayed impact | Longitudinal + retention analysis |
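The "low power/traffic" row can be checked before launch with a standard two-proportion sample-size calculation. A minimal sketch (normal approximation, alpha 0.05 two-sided, 80% power; the baseline and lift figures are illustrative):

```python
# Sketch: per-arm sample size for detecting an absolute lift in a
# conversion rate (two-proportion z-test, normal approximation,
# alpha = 0.05 two-sided, power = 0.80). If your traffic cannot cover
# this, the A/B test will be underpowered -- use the alternatives above.
import math

Z_ALPHA = 1.959964  # two-sided alpha = 0.05
Z_BETA = 0.841621   # power = 0.80

def sample_size_per_arm(p_base: float, lift_abs: float) -> int:
    p2 = p_base + lift_abs
    p_bar = (p_base + p2) / 2
    num = (Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
           + Z_BETA * math.sqrt(p_base * (1 - p_base) + p2 * (1 - p2))) ** 2
    return math.ceil(num / lift_abs ** 2)

# Detecting a +0.5 pp lift on a 5% baseline needs roughly 31k users per arm.
print(sample_size_per_arm(0.05, 0.005))
```

Smaller lifts need quadratically more traffic, which is why low-traffic products should favor usability tests plus trend analysis.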
Common Confounds (Call Out Early)
- Selection bias (only power users respond).
- Survivorship bias (you miss churned users).
- Novelty effect (short-term lift).
- Instrumentation changes mid-test (metrics drift).
Optional: AI/Automation Research Considerations
Use only when researching automation/AI-powered features. Skip for traditional software UX.
2026 benchmark: Trend reports consistently highlight AI-assisted analysis. Use AI for speed while keeping humans responsible for strategy and interpretation. Example reference: https://www.lyssna.com/blog/ux-research-trends/
Key Questions
| Dimension | Question | Methods |
|---|---|---|
| Mental model | What do users think the system can/can’t do? | Interviews, concept tests |
| Trust calibration | When do users over/under-rely? | Scenario tests, log review |
| Explanation usefulness | Does “why” help decisions? | A/B explanation variants, interviews |
| Failure recovery | Do users recover and finish tasks? | Failure-path usability tests |
Error Taxonomy (User-Visible)
| Failure type | Typical impact | What to measure |
|---|---|---|
| Wrong output | Rework, lost trust | Verification + override rate |
| Missing output | Manual fallback | Fallback completion rate |
| Unclear output | Confusion | Clarification requests |
| Non-recoverable failure | Blocked flow | Time-to-recovery, support contact |
Optional: AI-Assisted Research Ops (Guardrailed)
- Use automation for transcription/tagging only after PII redaction.
- Maintain an audit trail: every theme links back to raw quotes/clips.
Synthetic Users: When Appropriate (2026)
Trend reports frequently mention synthetic/AI participants. Use with clear boundaries. Example reference: https://www.lyssna.com/blog/ux-research-trends/
| Use Case | Appropriate? | Why |
|---|---|---|
| Early concept brainstorming | Supplement only | Generate edge cases, not validation |
| Scenario/edge case expansion | Yes | Broaden coverage before real testing |
| Moderator training/practice | Yes | Practice without participant burden |
| Hypothesis generation | Yes | Explore directions to test with real users |
| Validation/go-no-go decisions | Never | Cannot substitute lived experience |
| Usability findings as evidence | Never | Real behavior required |
| Quotes in reports | Never | Fabricated quotes damage credibility |
Critical rule: Synthetic outputs are hypotheses, not evidence. Always validate with real users before shipping.
Navigation
Resources
Core Research Methods:
- references/research-frameworks.md — JTBD, Kano, Double Diamond, Service Blueprint, opportunity mapping
- references/ux-audit-framework.md — Heuristic evaluation, cognitive walkthrough, severity rating
- references/usability-testing-guide.md — Task design, facilitation, analysis
- references/ux-metrics-framework.md — Task metrics, SUS/HEART, measurement guidance
- references/customer-journey-mapping.md — Journey mapping and service blueprints
- references/pain-point-extraction.md — Feedback-to-themes method
- references/review-mining-playbook.md — B2B/B2C review mining
Demographic & Quantitative Research (NEW):
- references/demographic-research-methods.md — Inclusive research for seniors, children, cultures, disabilities
- references/ab-testing-implementation.md — A/B testing deep-dive (sample size, analysis, pitfalls)
Competitive UX Analysis & Flow Patterns:
- references/competitive-ux-analysis.md — Step-by-step flow patterns from industry leaders (Wise, Revolut, Shopify, Notion, Linear, Stripe) + benchmarking methodology
Data & Sources:
- data/sources.json — Curated external references
Domain-Specific UX Benchmarking
IMPORTANT: When designing UX flows for a specific domain, you MUST use WebSearch to find and suggest best-practice patterns from industry leaders.
Trigger Conditions
- "We're designing [flow type] for [domain]"
- "What's the best UX for [feature] in [industry]?"
- "How do [Company A, Company B] handle [flow]?"
- "Benchmark our [feature] against competitors"
- Any UX design task with identifiable domain context
Domain → Leader Lookup Table
| Domain | Industry Leaders to Check | Key Flows |
|---|---|---|
| Fintech/Banking | Wise, Revolut, Monzo, N26, Chime, Mercury | Onboarding/KYC, money transfer, card management, spend analytics |
| E-commerce | Shopify, Amazon, Stripe Checkout | Checkout, cart, product pages, returns |
| SaaS/B2B | Linear, Notion, Figma, Slack, Airtable | Onboarding, settings, collaboration, permissions |
| Developer Tools | Stripe, Vercel, GitHub, Supabase | Docs, API explorer, dashboard, CLI |
| Consumer Apps | Spotify, Airbnb, Uber, Instagram | Discovery, booking, feed, social |
| Healthcare | Oscar, One Medical, Calm, Headspace | Appointment booking, records, compliance flows |
| EdTech | Duolingo, Coursera, Khan Academy | Onboarding, progress, gamification |
Required Searches
When user specifies a domain, execute:
- Search: "[domain] UX best practices 2026"
- Search: "[leader company] [flow type] UX"
- Search: "[leader company] app review UX" site:mobbin.com OR site:pageflows.com
- Search: "[domain] onboarding flow examples"
What to Report
After searching, provide:
- Pattern examples: Screenshots/flows from 2-3 industry leaders
- Key patterns identified: What they do well (with specifics)
- Applicable to your flow: How to adapt patterns
- Differentiation opportunity: Where you could improve on leaders
Example Output Format
```text
DOMAIN: Fintech (Money Transfer)
BENCHMARKED: Wise, Revolut
WISE PATTERNS:
- Upfront fee transparency (shows exact fee before recipient input)
- Mid-transfer rate lock (shows countdown timer)
- Delivery time estimate per payment method
- Recipient validation (bank account check before send)
REVOLUT PATTERNS:
- Instant send to Revolut users (P2P first)
- Currency conversion preview with rate comparison
- Scheduled/recurring transfers prominent
APPLY TO YOUR FLOW:
1. Add fee transparency at step 1 (not step 3)
2. Show delivery estimate per payment rail
3. Consider rate lock feature for FX transfers
DIFFERENTIATION OPPORTUNITY:
- Neither shows historical rate chart—add "is now a good time?" context
```
Trend Awareness Protocol
IMPORTANT: When users ask recommendation questions about UX research, you MUST use WebSearch to check current trends before answering.
Tool/Trend Triggers
- "What's the best UX research tool for [use case]?"
- "What should I use for [usability testing/surveys/analytics]?"
- "What's the latest in UX research?"
- "Current best practices for [user interviews/A/B testing/accessibility]?"
- "Is [research method] still relevant in 2026?"
- "What research tools should I use?"
- "Best approach for [remote research/unmoderated testing]?"
Tool/Trend Searches
- Search: "UX research trends 2026"
- Search: "UX research tools best practices 2026"
- Search: "[Maze/Hotjar/UserTesting] comparison 2026"
- Search: "AI in UX research 2026"
Tool/Trend Report Format
After searching, provide:
- Current landscape: What research methods/tools are popular NOW
- Emerging trends: New techniques or tools gaining traction
- Deprecated/declining: Methods that are losing effectiveness
- Recommendation: Based on fresh data and current practices
Example Topics (verify with fresh search)
- AI-powered research tools (Maze AI, Looppanel)
- Unmoderated testing platforms evolution
- Voice of Customer (VoC) platforms
- Analytics and behavioral tools (Hotjar, FullStory)
- Accessibility testing tools and standards
- Research repository and insight management
Templates
- Shared plan template: ../software-clean-code-standard/assets/checklists/ux-research-plan-template.md — Product-agnostic research plan template (core + optional AI)
- assets/research-plan-template.md — UX research plan template
- assets/testing/usability-test-plan.md — Usability test plan
- assets/testing/usability-testing-checklist.md — Usability testing checklist
- assets/audits/heuristic-evaluation-template.md — Heuristic evaluation
- assets/audits/ux-audit-report-template.md — Audit report