replay-ux-research
# Replay UX Research
Analyze session replays from real external users of sentry.io to surface UX patterns, pain points, and representative journeys for a given product area. This uses Sentry's own dogfooding org.
## Inputs
If `$ARGUMENTS` is empty, ask the user which product area to research.
## Prerequisites
This skill requires the Sentry MCP server to be connected. The following tools are used:
- `search_events` — Search for session replays
- `get_replay_details` — Get detailed replay information
- `search_issues` — Look up error issues
- `get_sentry_resource` — Fetch issue details from URLs
If these tools are not available, ask the user to connect the Sentry MCP server before proceeding.
## Step 1: Map product area to URL patterns
Read `references/product-areas.md` and find the URL patterns for the requested area.

If the product area is not listed, infer a URL pattern from the area name. Most Sentry product areas follow the pattern `/<area-name>/` in the URL path. The reference file may not cover newer product areas — confirm your assumption with the user if unclear.

## Step 2: Search for replays
Search for replays from external (non-Sentry-employee) users. 25 replays is a good starting point — go deeper if the product area is complex, if early patterns are ambiguous, or if the user wants a more comprehensive picture.

Start with the last 24 hours — extend to 48h or 7d if needed to reach your target count. Run multiple `search_events` calls if needed. Use `limit: 50` per call.

If you can't find enough replays (fewer than 10 even at 7 days), tell the user what you found and ask them to help iterate — they may suggest broader URL patterns, a different time range, or a related product area to include.
Query construction:

Use natural language queries like: `replays from the last 24 hours where url contains "/<area-path>" excluding user emails ending in @sentry.io and @getsentry.com`

Key filters:

- Time range: last 24 hours (extend if needed)
- URL pattern: match the product area paths from Step 1
- Exclude employees: `-user.email:*@sentry.io -user.email:*@getsentry.com`
- Environment: prod

Do NOT pass a `projectSlug` filter — replays span the whole org.
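Put together, the filters above might translate into a single search query along these lines. This is a sketch, not verified Sentry search syntax, and `/issues/` is a placeholder for whichever area paths Step 1 produced:

```
url:"*/issues/*" -user.email:*@sentry.io -user.email:*@getsentry.com environment:prod
```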
## Step 3: Get replay details
Call `get_replay_details` for each replay found in Step 2. Run these calls in parallel batches for speed.

For each replay, capture:

- User: email domain only (the API returns full emails — never include these in output)
- Journey: ordered list of pages visited (from URLs and activity breadcrumbs)
- Duration: total session length
- Replay type: `session` (randomly sampled from normal browsing) vs `buffer` (triggered by an event — error, manual flush, or specific user action like submitting feedback or going through checkout). Note this distinction in your analysis since buffer replays are biased toward error/action moments, not typical browsing.
- Entry context: first URL tells you how they arrived — look for referrer signals like `referrer=slack` or `notification_uuid` in query params (Slack notification), email link patterns like `alert_rule_id`, or bare URLs (bookmark/direct nav)
- Engagement signals: error count, rage clicks, dead clicks, warning count
- Browser/OS/Device: for device distribution context
- Activity breadcrumbs: page views, navigation patterns, key interactions
## Step 4: Investigate significant errors
After collecting replay details, identify errors that appear in multiple replays or seem likely to affect the user experience. For each significant error:

1. Triage by frequency: If the same issue ID (e.g., `JAVASCRIPT-33RM`) appears in 3+ replays, it's worth investigating.

2. Check the issue in Sentry: Use `search_issues` to find the issue, or `get_sentry_resource` with the issue URL from the replay details. Understand:
   - What is the error? (message, stack trace context)
   - How many total users/events does it affect? (beyond just this replay sample)
   - Is it assigned or being worked on?

3. Infer user-facing impact from behavioral signals: We cannot see the rendered page content through replay metadata — only by watching the replay in-browser. Instead, infer impact from what users did after the error:
   - Retried the same action → they likely saw a failure and tried again
   - Navigated away immediately → they were blocked or gave up
   - Continued their flow normally → the error may be silent/cosmetic
   - Rage-clicked or dead-clicked after → the UI may have become unresponsive
   - Spent a long time on the page after → they may be reading an error message or confused
   - No behavioral change at all → error was likely invisible to the user

4. Classify each error based on this evidence:
   - Likely blocking: Error + user retried/left/couldn't continue. High confidence of user impact.
   - Likely degrading: Error + user continued but with unusual behavior. Moderate confidence.
   - Likely silent: Error fired but user behavior was unaffected. Low confidence of user impact.
   - Unclear: Not enough behavioral signal to judge. Flag for manual replay review.

Always note the confidence level and recommend watching specific replays to confirm. Link directly to the replay URL for each classified error.

Include this classification in the Friction & Pain Points section. Don't report likely-silent errors as pain points — list them in a separate "Background Errors (likely silent)" subsection for completeness.
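The classification rules above reduce to a small decision function. The sketch below is a heuristic illustration, with signal names invented for the example rather than taken from any Sentry API:

```python
def classify_error_impact(retried: bool, left_immediately: bool,
                          unusual_behavior: bool, behavior_changed: bool) -> str:
    """Map post-error behavioral signals to an impact class (heuristic sketch)."""
    if retried or left_immediately:
        return "likely-blocking"   # user visibly failed, or gave up and left
    if unusual_behavior:
        return "likely-degrading"  # user continued, but with rage/dead clicks etc.
    if not behavior_changed:
        return "likely-silent"     # error fired, user behavior unaffected
    return "unclear"               # some change, no strong signal: watch the replay
```

The precedence matters: a retry or immediate exit outranks everything else, and "unclear" is the fallback when behavior changed but no specific friction signal explains it.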
## Step 5: Analyze patterns
Look at the replays through these UX research lenses:
### Behavioral patterns
- Common journeys: What navigation paths do users take? What's the typical flow?
- Entry points: How do users arrive? Categorize: alert notification (Slack/email), direct bookmark, organic navigation from another page. The first URL's query params reveal this.
- Task completion: Did the user appear to accomplish a goal, or did they wander/abandon? Signs of completion: navigating to a detail view then leaving. Signs of abandonment: short session, back-and-forth navigation, leaving from the same page they entered.
- Time on task: How long do users spend on key pages before acting?
### Friction & discovery
- Friction signals: Rage clicks, dead clicks, errors — but also hesitation (visiting the same page repeatedly), thrashing (rapid back-and-forth between pages), and retry loops (repeating the same action sequence).
- Feature discovery: Are users finding sub-features (filters, search, sort, bulk actions) or only using the primary view? Look at URL query params and breadcrumbs for evidence of feature use.
- User intent signals: Mine URL query params for search terms, filter values, sort orders, and date ranges users set. These are the closest thing to verbatim user "quotes" — they reveal what users are looking for in their own words. (e.g., `query=is%3Aunresolved+assigned%3Ame` tells you the user is triaging their own assigned issues.)
- Workarounds: Any unexpected navigation patterns that suggest a missing feature or confusing flow? (e.g., going to settings mid-task, opening multiple pages in sequence that could be one view)
- Error recovery: When users encounter errors, do they recover and continue or abandon?
### Context
- Replay trigger mix: What proportion are `session` (random sample) vs `buffer` (event-triggered)? Buffer replays show moments where something notable happened (error, feedback submission, checkout, etc.) — they're valuable for friction analysis but aren't representative of typical browsing. Call out this bias when drawing conclusions.
- Return visitors: Do any email domains appear in multiple replays? Repeat visitors suggest habitual usage — their journeys may reveal power-user patterns or persistent pain points they've learned to work around.
- User diversity: Are replays from many different orgs/companies or concentrated? Are there differences in behavior by org?
- Device/browser distribution: What are users primarily using?
- Drop-off points: Where do users leave or navigate away?
## Step 6: Write the report
Use the template in `references/output-template.md`. Be specific — cite individual replays as evidence for each pattern. Link to replay URLs so the reader can watch the replay themselves.

Privacy: Never include full user email addresses in the report. Use anonymized identifiers like "user from [company domain]" or "User A, B, C."