software-ux-research

Software UX Research Skill — Quick Reference

Use this skill to identify problems/opportunities and de-risk decisions. Use software-ui-ux-design to implement UI patterns, component changes, and design system updates.

Dec 2025 Baselines (Core)

When to Use This Skill

  • Discovery: user needs, JTBD, opportunity sizing, mental models.
  • Validation: concepts, prototypes, onboarding/first-run success.
  • Evaluative: usability tests, heuristic evaluation, cognitive walkthroughs.
  • Quant/behavioral: funnels, cohorts, instrumentation gaps, guardrails.
  • Research Ops: intake, prioritization, repository/taxonomy, consent/PII handling.
  • Demographic research: age-diverse, cultural, and accessibility-focused participant recruitment.
  • A/B testing: experiment design, sample size, analysis, pitfalls.

When NOT to Use This Skill

  • UI implementation → Use software-ui-ux-design for components, patterns, code
  • Analytics instrumentation → Use marketing-product-analytics for tracking plans and qa-observability for implementation patterns
  • Accessibility compliance audit → Use accessibility-specific checklists (WCAG conformance)
  • Marketing research → Use marketing-social-media or related marketing skills
  • A/B test platform setup → Use experimentation platforms (Statsig, GrowthBook, LaunchDarkly)

Operating Mode (Core)

If inputs are missing, ask for:
  • Decision to unblock (what will change based on this research).
  • Target roles/segments and top tasks.
  • Platforms and contexts (web/mobile/desktop; remote/on-site; assisted tech).
  • Existing evidence (analytics, tickets, reviews, recordings, prior studies).
  • Constraints (timeline, recruitment access, compliance, budget).
Default outputs (pick what the user asked for):
  • Research plan + output contract (prefer ../software-clean-code-standard/assets/checklists/ux-research-plan-template.md; use assets/research-plan-template.md for skill-specific detail)
  • Study protocol (tasks/script + success metrics + recruitment plan)
  • Findings report (issues + severity + evidence + recommendations + confidence)
  • Decision brief (options + tradeoffs + recommendation + measurement plan)

Method Chooser (Core)

Research Types (Keep Explicit)

| Type | Goal | Primary Outputs |
|---|---|---|
| Discovery | Understand needs and context | JTBD, opportunity areas, constraints |
| Validation | Reduce solution risk | Go/no-go, prioritization signals |
| Evaluative | Improve usability/accessibility | Severity-rated issues + fixes |

Decision Tree (Fast)

```text
What do you need?
  ├─ WHY / needs / context → interviews, contextual inquiry, diary
  ├─ HOW / usability → moderated usability test, cognitive walkthrough, heuristic eval
  ├─ WHAT / scale → analytics/logs + targeted qual follow-ups
  └─ WHICH / causal → experiments (if feasible) or preference tests
```
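The tree above can be captured as a small lookup for triaging requests. A minimal Python sketch; the category keys and function name are illustrative assumptions, and the method lists come from the branches above.

```python
# Map the decision tree's question categories to candidate methods.
METHOD_CHOOSER = {
    "why":   ["interviews", "contextual inquiry", "diary study"],
    "how":   ["moderated usability test", "cognitive walkthrough", "heuristic evaluation"],
    "what":  ["analytics/log analysis", "targeted qualitative follow-ups"],
    "which": ["experiment (if feasible)", "preference test"],
}

def choose_methods(question_type: str) -> list[str]:
    """Return candidate methods for a WHY/HOW/WHAT/WHICH question."""
    key = question_type.strip().lower()
    if key not in METHOD_CHOOSER:
        raise ValueError(f"unknown question type: {question_type!r}")
    return METHOD_CHOOSER[key]

print(choose_methods("WHY"))
```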

Method Selection Table (Practical)

| Question | Best methods | Avoid when | Output |
|---|---|---|---|
| What problems matter most? | Interviews, contextual inquiry, diary | Only surveys/analytics | Problem framing + evidence |
| Can users complete key tasks? | Moderated usability tests, task analysis | Stakeholder review | Task success + issue list |
| Is navigation findable? | Tree test, first-click, card sort | Extremely small audience [Inference] | IA changes + labels |
| What is happening at scale? | Funnels, cohorts, logs, support taxonomy | Instrumentation missing | Baselines + segments + drop-offs |
| Which variant performs better? | A/B, switchback, holdout | Insufficient power or high risk | Decision with confidence + guardrails |

Research by Product Stage

Stage Framework (What to Do When)

| Stage | Decisions | Primary Methods | Secondary Methods | Output |
|---|---|---|---|---|
| Discovery | What to build and for whom | Interviews, field/diary, journey mapping | Competitive analysis, feedback mining | Opportunity brief + JTBD |
| Concept/MVP | Does the concept work? | Concept test, prototype usability | First-click/tree test | MVP scope + onboarding plan |
| Launch | Is it usable + accessible? | Usability testing, accessibility review | Heuristic eval, session replay | Launch blockers + fixes |
| Growth | What drives adoption/value? | Segmented analytics + qual follow-ups | Churn interviews, surveys | Retention drivers + friction |
| Maturity | What to optimize/deprecate? | Experiments, longitudinal tracking | Unmoderated tests | Incremental roadmap |

Post-Launch Measurement (What to Track)

| Metric category | What it answers | Pair with |
|---|---|---|
| Adoption | Are people using it? | Outcome/value metric |
| Value | Does it help users succeed? | Adoption + qualitative reasons |
| Reliability | Does it fail in ways users notice? | Error rate + recovery success |
| Accessibility | Can diverse users complete flows? | Assistive-tech coverage + defect trends |

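The pairing rule can be enforced mechanically in a reporting script so an adoption number never ships without its companion outcome metric. A sketch with illustrative metric names, not a prescribed schema:

```python
# Each tracked metric must be reported together with its paired metric.
PAIRINGS = {
    "adoption": "value",
    "reliability": "recovery_success",
    "accessibility": "assistive_tech_coverage",
}

def check_pairs(report: dict) -> dict:
    """Raise if a metric appears without its required companion."""
    missing = [p for m, p in PAIRINGS.items() if m in report and p not in report]
    if missing:
        raise ValueError(f"unpaired metrics; also track: {missing}")
    return report

print(check_pairs({"adoption": 0.42, "value": 0.31}))
```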

Research for Complex Systems (Workflows, Admin, Regulated)

Complexity Indicators

| Indicator | Example | Research Implication |
|---|---|---|
| Multi-step workflows | Draft → approve → publish | Task analysis + state mapping |
| Multi-role permissions | Admin vs editor vs viewer | Test each role + transitions |
| Data dependencies | Requires integrations/sync | Error-path + recovery testing |
| High stakes | Finance, healthcare | Safety checks + confirmations |
| Expert users | Dev tools, analytics | Recruit real experts (not proxies) |

Evaluation Methods (Core)

  • Contextual inquiry: observe real work and constraints.
  • Task analysis: map goals → steps → failure points.
  • Cognitive walkthrough: evaluate learnability and signifiers.
  • Error-path testing: timeouts, offline, partial data, permission loss, retries.
  • Multi-role walkthrough: simulate handoffs (creator → reviewer → admin).

Multi-Role Coverage Checklist

  • Role-permission matrix documented.
  • “No access” UX defined (request path, least-privilege defaults).
  • Cross-role handoffs tested (notifications, state changes, audit history).
  • Error recovery tested for each role (retry, undo, escalation).

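A documented role-permission matrix also makes the "no access" cases enumerable for walkthrough planning. A sketch with invented roles and actions; substitute your product's real policy:

```python
# Illustrative role-permission matrix (not a real product's policy).
MATRIX = {
    "creator":  {"draft", "edit"},
    "reviewer": {"comment", "approve"},
    "admin":    {"draft", "edit", "comment", "approve", "publish"},
}

def no_access_cases(actions):
    """(role, action) pairs whose 'no access' UX needs a defined request path."""
    return [(role, a) for role, granted in MATRIX.items()
            for a in actions if a not in granted]

# Every denied pair below should appear in the test protocol.
for role, action in no_access_cases(["publish", "approve"]):
    print(f"walkthrough needed: {role} -> {action}")
```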

Research Ops & Governance (Core)

Intake (Make Requests Comparable)

Minimum required fields:
  • Decision to unblock and deadline.
  • Research questions (primary + secondary).
  • Target users/segments and recruitment constraints.
  • Existing evidence and links.
  • Deliverable format + audience.

Prioritization (Simple Scoring)

Use a lightweight score to avoid backlog paralysis:
  • Decision impact
  • Knowledge gap
  • Timing urgency
  • Feasibility (recruitment + time)
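The four factors combine into a single sortable score. A sketch assuming equal weights and 1-5 ratings (both are assumptions to calibrate with your team); the request names are invented:

```python
def priority_score(impact: int, gap: int, urgency: int, feasibility: int) -> int:
    """Sum of four 1-5 ratings; higher means schedule sooner."""
    for v in (impact, gap, urgency, feasibility):
        if not 1 <= v <= 5:
            raise ValueError("each rating must be between 1 and 5")
    return impact + gap + urgency + feasibility

# Example intake queue, highest score first.
requests = [
    ("checkout drop-off study", priority_score(5, 4, 5, 3)),
    ("settings IA card sort", priority_score(3, 3, 2, 5)),
]
for name, score in sorted(requests, key=lambda r: r[1], reverse=True):
    print(f"{score:>2}  {name}")
```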

Repository & Taxonomy

  • Store each study with: method, date, product area, roles, tasks, key findings, raw evidence links.
  • Tag for reuse: problem type (navigation/forms/performance), component/pattern, funnel step.
  • Prefer “atomic” findings (one insight per card) to enable recombination [Inference].
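An atomic finding with the fields above can be sketched as a record type. The field names and example values here are illustrative assumptions, not a prescribed repository schema:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    insight: str                 # exactly one insight per card
    method: str
    date: str                    # ISO date of the study
    product_area: str
    severity: int                # e.g. 1 (cosmetic) .. 4 (blocker)
    tags: set = field(default_factory=set)       # problem type, component, funnel step
    evidence_links: list = field(default_factory=list)

card = Finding(
    insight="Users miss the saved-state indicator after bulk edits",
    method="moderated usability test",
    date="2025-11-18",
    product_area="editor",
    severity=3,
    tags={"forms", "feedback", "funnel:edit"},
    evidence_links=["recordings/p4-clip-02"],
)
print(card.insight)
```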

Consent, PII, and Access Control

Follow applicable privacy laws; for EU data processing, GDPR is the primary reference: https://eur-lex.europa.eu/eli/reg/2016/679/oj
PII handling checklist:
  • Collect minimum PII needed for scheduling and incentives.
  • Store identity/contact separately from study data.
  • Redact names/emails from transcripts before broad sharing.
  • Restrict raw recordings to need-to-know access.
  • Document consent, purpose, retention, and opt-out path.
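Transcript redaction can be partially automated before broad sharing. A sketch that strips emails and phone-like strings; a regex pass like this catches obvious PII only, and participant names still need manual review:

```python
import re

# Patterns for obvious PII; deliberately broad, tune for your transcripts.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone-like strings with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Follow up with jane.doe+ux@example.com or +1 415 555 0100."))
```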

Research Democratization (2026 Trend)

Research democratization is a recurring 2026 trend: non-researchers increasingly conduct research. Enable carefully with guardrails.
| Approach | Guardrails | Risk Level |
|---|---|---|
| Templated usability tests | Script + task templates provided | Low |
| Customer interviews by PMs | Training + review required | Medium |
| Survey design by anyone | Central review + standard questions | Medium |
| Unsupervised research | Not recommended | High |
Guardrails for non-researchers:
  • Pre-approved research templates only
  • Central review of findings before action
  • No direct participant recruitment without ops approval
  • Mandatory bias awareness training
  • Clear escalation path for unexpected findings

Measurement & Decision Quality (Core)

Research ROI Quick Reference

| Research Activity | Proxy Metric | Calculation |
|---|---|---|
| Usability testing finding | Prevented dev rework | Hours saved × $150/hr |
| Discovery interview | Prevented build-wrong-thing | Sprint cost × risk reduction % |
| A/B test conclusive result | Improved conversion | (ΔConversion × Traffic × LTV) - Test cost |
| Heuristic evaluation | Early defect detection | Defects found × Cost-to-fix-later |
Rules of thumb:
  • 1 usability finding that prevents 40 hours of rework = $6,000 value
  • 1 discovery insight that prevents 1 wasted sprint = $50,000-100,000 value
  • Research that improves conversion 0.5% on 100k visitors × $50 LTV = $25,000/month
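The conversion rule of thumb is plain arithmetic. A sketch using the example figures above; the rates and LTV are the example's assumptions, not benchmarks:

```python
def conversion_lift_value(delta_rate, monthly_visitors, ltv, test_cost=0.0):
    """Monthly value of a conversion-rate lift, net of test cost."""
    return delta_rate * monthly_visitors * ltv - test_cost

# 0.5% lift on 100k monthly visitors at $50 LTV.
print(conversion_lift_value(0.005, 100_000, 50))   # 25000.0
```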

Triangulation Rubric

| Confidence | Evidence requirement | Use for |
|---|---|---|
| High | Multiple methods or sources agree | High-impact decisions |
| Medium | Strong signal from one method + supporting indicators | Prioritization |
| Low | Single source / small sample | Exploratory hypotheses |

Adoption vs Value (Avoid Vanity Metrics)

| Metric type | Example | Common pitfall |
|---|---|---|
| Adoption | Feature usage rate | “Used” ≠ “helpful” |
| Value/outcome | Task success, goal completion | Harder to instrument |

When NOT to Run A/B Tests

| Situation | Why it fails | Better method |
|---|---|---|
| Low power/traffic | Inconclusive results | Usability tests + trends |
| Many variables change | Attribution impossible | Prototype tests → staged rollout |
| Need “why” | Experiments don’t explain | Interviews + observation |
| Ethical constraints | Harmful denial | Phased rollout + holdouts |
| Long-term effects | Short tests miss delayed impact | Longitudinal + retention analysis |
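A quick power check settles the low-traffic case before any test is committed: if the required sample per arm exceeds available traffic, pick another method. A normal-approximation sketch, not a substitute for your experimentation platform's calculator:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_base: float, mde: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Sample size per arm to detect an absolute lift of `mde` on a proportion."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = z.inv_cdf(power)
    p2 = p_base + mde
    var = p_base * (1 - p_base) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / mde ** 2)

# Detecting a 2-point lift on a 10% baseline takes roughly 3.8k users per arm.
print(n_per_arm(0.10, 0.02))
```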

Common Confounds (Call Out Early)

  • Selection bias (only power users respond).
  • Survivorship bias (you miss churned users).
  • Novelty effect (short-term lift).
  • Instrumentation changes mid-test (metrics drift).

Optional: AI/Automation Research Considerations

Use only when researching automation/AI-powered features. Skip for traditional software UX.
2026 benchmark: Trend reports consistently highlight AI-assisted analysis. Use AI for speed while keeping humans responsible for strategy and interpretation. Example reference: https://www.lyssna.com/blog/ux-research-trends/

Key Questions

| Dimension | Question | Methods |
|---|---|---|
| Mental model | What do users think the system can/can’t do? | Interviews, concept tests |
| Trust calibration | When do users over/under-rely? | Scenario tests, log review |
| Explanation usefulness | Does “why” help decisions? | A/B explanation variants, interviews |
| Failure recovery | Do users recover and finish tasks? | Failure-path usability tests |

Error Taxonomy (User-Visible)

| Failure type | Typical impact | What to measure |
|---|---|---|
| Wrong output | Rework, lost trust | Verification + override rate |
| Missing output | Manual fallback | Fallback completion rate |
| Unclear output | Confusion | Clarification requests |
| Non-recoverable failure | Blocked flow | Time-to-recovery, support contact |
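The taxonomy's metrics can be derived from an event log. A sketch in which the event names and log shape are invented for illustration; real instrumentation will differ:

```python
from collections import Counter

# Toy event log: AI outputs users accepted or overrode, plus fallback attempts.
events = [
    {"type": "ai_output", "outcome": "accepted"},
    {"type": "ai_output", "outcome": "overridden"},   # wrong output
    {"type": "ai_output", "outcome": "accepted"},
    {"type": "fallback",  "outcome": "completed"},    # missing-output path
    {"type": "fallback",  "outcome": "abandoned"},
]

counts = Counter((e["type"], e["outcome"]) for e in events)
outputs = sum(v for (t, _), v in counts.items() if t == "ai_output")
override_rate = counts[("ai_output", "overridden")] / outputs
fallbacks = sum(v for (t, _), v in counts.items() if t == "fallback")
fallback_completion = counts[("fallback", "completed")] / fallbacks

print(f"override rate: {override_rate:.0%}, fallback completion: {fallback_completion:.0%}")
```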

Optional: AI-Assisted Research Ops (Guardrailed)

  • Use automation for transcription/tagging only after PII redaction.
  • Maintain an audit trail: every theme links back to raw quotes/clips.

Synthetic Users: When Appropriate (2026)

Trend reports frequently mention synthetic/AI participants. Use with clear boundaries. Example reference: https://www.lyssna.com/blog/ux-research-trends/
| Use Case | Appropriate? | Why |
|---|---|---|
| Early concept brainstorming | Warning: supplement only | Generate edge cases, not validation |
| Scenario/edge case expansion | Yes | Broaden coverage before real testing |
| Moderator training/practice | Yes | Practice without participant burden |
| Hypothesis generation | Yes | Explore directions to test with real users |
| Validation/go-no-go decisions | Never | Cannot substitute lived experience |
| Usability findings as evidence | Never | Real behavior required |
| Quotes in reports | Never | Fabricated quotes damage credibility |
Critical rule: Synthetic outputs are hypotheses, not evidence. Always validate with real users before shipping.


Navigation

Resources

Core Research Methods:
  • references/research-frameworks.md — JTBD, Kano, Double Diamond, Service Blueprint, opportunity mapping
  • references/ux-audit-framework.md — Heuristic evaluation, cognitive walkthrough, severity rating
  • references/usability-testing-guide.md — Task design, facilitation, analysis
  • references/ux-metrics-framework.md — Task metrics, SUS/HEART, measurement guidance
  • references/customer-journey-mapping.md — Journey mapping and service blueprints
  • references/pain-point-extraction.md — Feedback-to-themes method
  • references/review-mining-playbook.md — B2B/B2C review mining
Demographic & Quantitative Research (NEW):
  • references/demographic-research-methods.md — Inclusive research for seniors, children, cultures, disabilities
  • references/ab-testing-implementation.md — A/B testing deep-dive (sample size, analysis, pitfalls)
Competitive UX Analysis & Flow Patterns:
  • references/competitive-ux-analysis.md — Step-by-step flow patterns from industry leaders (Wise, Revolut, Shopify, Notion, Linear, Stripe) + benchmarking methodology
Data & Sources:
  • data/sources.json — Curated external references


Domain-Specific UX Benchmarking

特定领域UX基准测试

IMPORTANT: When designing UX flows for a specific domain, you MUST use WebSearch to find and suggest best-practice patterns from industry leaders.

Trigger Conditions

  • "We're designing [flow type] for [domain]"
  • "What's the best UX for [feature] in [industry]?"
  • "How do [Company A, Company B] handle [flow]?"
  • "Benchmark our [feature] against competitors"
  • Any UX design task with identifiable domain context

Domain → Leader Lookup Table

| Domain | Industry Leaders to Check | Key Flows |
|---|---|---|
| Fintech/Banking | Wise, Revolut, Monzo, N26, Chime, Mercury | Onboarding/KYC, money transfer, card management, spend analytics |
| E-commerce | Shopify, Amazon, Stripe Checkout | Checkout, cart, product pages, returns |
| SaaS/B2B | Linear, Notion, Figma, Slack, Airtable | Onboarding, settings, collaboration, permissions |
| Developer Tools | Stripe, Vercel, GitHub, Supabase | Docs, API explorer, dashboard, CLI |
| Consumer Apps | Spotify, Airbnb, Uber, Instagram | Discovery, booking, feed, social |
| Healthcare | Oscar, One Medical, Calm, Headspace | Appointment booking, records, compliance flows |
| EdTech | Duolingo, Coursera, Khan Academy | Onboarding, progress, gamification |

Required Searches

When the user specifies a domain, run these searches:
  1. "[domain] UX best practices 2026"
  2. "[leader company] [flow type] UX"
  3. "[leader company] app review UX" site:mobbin.com OR site:pageflows.com
  4. "[domain] onboarding flow examples"

What to Report

After searching, provide:
  • Pattern examples: Screenshots/flows from 2-3 industry leaders
  • Key patterns identified: What they do well (with specifics)
  • Applicable to your flow: How to adapt patterns
  • Differentiation opportunity: Where you could improve on leaders

Example Output Format

```text
DOMAIN: Fintech (Money Transfer)
BENCHMARKED: Wise, Revolut

WISE PATTERNS:
- Upfront fee transparency (shows exact fee before recipient input)
- Mid-transfer rate lock (shows countdown timer)
- Delivery time estimate per payment method
- Recipient validation (bank account check before send)

REVOLUT PATTERNS:
- Instant send to Revolut users (P2P first)
- Currency conversion preview with rate comparison
- Scheduled/recurring transfers prominent

APPLY TO YOUR FLOW:
1. Add fee transparency at step 1 (not step 3)
2. Show delivery estimate per payment rail
3. Consider rate lock feature for FX transfers

DIFFERENTIATION OPPORTUNITY:
- Neither shows historical rate chart—add "is now a good time?" context
```

Trend Awareness Protocol

IMPORTANT: When users ask recommendation questions about UX research, you MUST use WebSearch to check current trends before answering.

Tool/Trend Triggers

  • "What's the best UX research tool for [use case]?"
  • "What should I use for [usability testing/surveys/analytics]?"
  • "What's the latest in UX research?"
  • "Current best practices for [user interviews/A/B testing/accessibility]?"
  • "Is [research method] still relevant in 2026?"
  • "What research tools should I use?"
  • "Best approach for [remote research/unmoderated testing]?"

Tool/Trend Searches

  1. "UX research trends 2026"
  2. "UX research tools best practices 2026"
  3. "[Maze/Hotjar/UserTesting] comparison 2026"
  4. "AI in UX research 2026"

Tool/Trend Report Format

After searching, provide:
  • Current landscape: What research methods/tools are popular NOW
  • Emerging trends: New techniques or tools gaining traction
  • Deprecated/declining: Methods that are losing effectiveness
  • Recommendation: Based on fresh data and current practices

Example Topics (verify with fresh search)

  • AI-powered research tools (Maze AI, Looppanel)
  • Unmoderated testing platforms evolution
  • Voice of Customer (VoC) platforms
  • Analytics and behavioral tools (Hotjar, FullStory)
  • Accessibility testing tools and standards
  • Research repository and insight management

Templates

  • Shared plan template: ../software-clean-code-standard/assets/checklists/ux-research-plan-template.md — Product-agnostic research plan template (core + optional AI)
  • assets/research-plan-template.md — UX research plan template
  • assets/testing/usability-test-plan.md — Usability test plan
  • assets/testing/usability-testing-checklist.md — Usability testing checklist
  • assets/audits/heuristic-evaluation-template.md — Heuristic evaluation
  • assets/audits/ux-audit-report-template.md — Audit report